repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-niu-fi | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-niu-fi
* source languages: niu
* target languages: fi
* OPUS readme: [niu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.eval.txt)
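A minimal usage sketch with the 🤗 `transformers` Marian classes (requires `sentencepiece`; the example sentence is illustrative only):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-niu-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Niuean input, Finnish output expected; the sentence is illustrative, not from the OPUS data.
batch = tokenizer(["Fakaalofa lahi atu"], return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```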
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fi | 24.8 | 0.474 |
| 31b61a78587f86dcc9d21ca14227693f |
Helsinki-NLP/opus-mt-nl-fr | Helsinki-NLP | marian | 10 | 1,752 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 | false |
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
| b53cba0432e1e1682a4ce5eb6b6b5c9d |
muhtasham/tiny-vanilla-target-glue-cola | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,188 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-vanilla-target-glue-cola
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8745
- Matthews Correlation: 0.0651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
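A hedged sketch of how the hyperparameters listed above map onto 🤗 `TrainingArguments`; the original training script is not included in this card, so treating the batch sizes as per-device values is an assumption:

```python
from transformers import TrainingArguments

# The listed Adam betas and epsilon match the Trainer defaults, so no optimizer
# arguments are needed here.
training_args = TrainingArguments(
    output_dir="tiny-vanilla-target-glue-cola",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)
```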
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6124 | 1.87 | 500 | 0.6204 | 0.0 |
| 0.603 | 3.73 | 1000 | 0.6181 | 0.0 |
| 0.5927 | 5.6 | 1500 | 0.6231 | 0.0194 |
| 0.5707 | 7.46 | 2000 | 0.6366 | 0.0149 |
| 0.5399 | 9.33 | 2500 | 0.6549 | 0.0646 |
| 0.5169 | 11.19 | 3000 | 0.6769 | 0.0736 |
| 0.4954 | 13.06 | 3500 | 0.6856 | 0.0742 |
| 0.4724 | 14.93 | 4000 | 0.7246 | 0.0777 |
| 0.4552 | 16.79 | 4500 | 0.7517 | 0.0767 |
| 0.434 | 18.66 | 5000 | 0.7868 | 0.0832 |
| 0.4192 | 20.52 | 5500 | 0.8144 | 0.0664 |
| 0.4092 | 22.39 | 6000 | 0.8025 | 0.0670 |
| 0.3874 | 24.25 | 6500 | 0.8745 | 0.0651 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 880c3a88ff96362b63504c4fee5970dd |
Devrim/prism-default | Devrim | null | 3 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,533 | false | The default Prism model available at https://github.com/thompsonb/prism. See the [README.md](https://github.com/thompsonb/prism/blob/master/README.md) file for more information.
**LICENCE NOTICE**
```
MIT License
Copyright (c) Brian Thompson
Portions of this software are copied from fairseq (https://github.com/pytorch/fairseq),
which is released under the MIT License and Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
``` | f21b6a454e6d803c5f230456ee03ae20 |
kpriyanshu256/whisper-large-v2-bn-600-32-1e-05 | kpriyanshu256 | whisper | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['as'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,504 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2-Bengali
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss:
- Wer: 12.51
## Model description
This model was originally trained on Bengali data but evaluated on the Assamese test split. Hence, TensorBoard evaluation logs for Assamese are present. The model was later evaluated on the Bengali test split.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0646 | 1.13 | 600 | | 12.51 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 6ea3c00a582b409ff677180786799e7f |
hyamaguchi58/distilbert-base-uncased-finetuned-emotion | hyamaguchi58 | distilbert | 16 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,337 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2138
- Accuracy: 0.923
- F1: 0.9229
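A minimal inference sketch with the 🤗 `pipeline` API (the input sentence is illustrative; label names come from the model's configuration):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hyamaguchi58/distilbert-base-uncased-finetuned-emotion",
)
# Returns a list of {'label': ..., 'score': ...} dicts; the emotion dataset has six classes.
print(classifier("I'm thrilled that my paper got accepted!"))
```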
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8023 | 1.0 | 250 | 0.3011 | 0.9085 | 0.9070 |
| 0.2414 | 2.0 | 500 | 0.2138 | 0.923 | 0.9229 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.12.1
| dea44fc9d0431828ac041a4664db1a78 |
mroopesh/my_billsum_model | mroopesh | t5 | 14 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['billsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,699 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5391
- Rouge1: 0.1425
- Rouge2: 0.0499
- Rougel: 0.1149
- Rougelsum: 0.1148
- Gen Len: 19.0
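A minimal inference sketch; the `summarize: ` prefix and the input text are assumptions based on the usual T5/billsum recipe and are not documented in this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mroopesh/my_billsum_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

bill_text = "The Act amends the Internal Revenue Code to provide a tax credit for ..."  # placeholder
inputs = tokenizer("summarize: " + bill_text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```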
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8276 | 0.1256 | 0.0355 | 0.1038 | 0.104 | 19.0 |
| No log | 2.0 | 124 | 2.6220 | 0.1356 | 0.0456 | 0.1106 | 0.1104 | 19.0 |
| No log | 3.0 | 186 | 2.5555 | 0.1423 | 0.0501 | 0.1145 | 0.1143 | 19.0 |
| No log | 4.0 | 248 | 2.5391 | 0.1425 | 0.0499 | 0.1149 | 0.1148 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 5b8f0ac0077b7a7f5df91cff9cbc97f7 |
jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s673 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 481 | false | # exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s673
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
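A usage sketch with the HuggingSound library mentioned above (the audio paths are placeholders; the `transcribe` call is assumed to follow HuggingSound's documented API):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-0_austria-10_s673")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders; 16 kHz speech
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```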
| f24bf3e123934d0167efd4aaf73f24f5 |
responsibility-framing/predict-perception-xlmr-focus-assassin | responsibility-framing | xlm-roberta | 12 | 21 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,990 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-assassin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3264
- Rmse: 0.9437
- Rmse Focus::a Sull'assassino: 0.9437
- Mae: 0.7093
- Mae Focus::a Sull'assassino: 0.7093
- R2: 0.6145
- R2 Focus::a Sull'assassino: 0.6145
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6131
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0403 | 1.0 | 15 | 1.1576 | 1.7771 | 1.7771 | 1.6028 | 1.6028 | -0.3670 | -0.3670 | -0.2174 | 0.0 | 0.5 | 0.2379 | nan |
| 0.9818 | 2.0 | 30 | 0.8916 | 1.5596 | 1.5596 | 1.4136 | 1.4136 | -0.0529 | -0.0529 | 0.3913 | 0.0 | 0.5 | 0.3793 | nan |
| 0.9276 | 3.0 | 45 | 0.9277 | 1.5909 | 1.5909 | 1.4560 | 1.4560 | -0.0955 | -0.0955 | 0.3913 | 0.0 | 0.5 | 0.3742 | nan |
| 0.8395 | 4.0 | 60 | 0.7958 | 1.4734 | 1.4734 | 1.3032 | 1.3032 | 0.0603 | 0.0603 | 0.5652 | 0.0 | 0.5 | 0.4598 | nan |
| 0.7587 | 5.0 | 75 | 0.4647 | 1.1259 | 1.1259 | 0.9316 | 0.9316 | 0.4513 | 0.4513 | 0.6522 | 0.0 | 0.5 | 0.5087 | nan |
| 0.696 | 6.0 | 90 | 0.5368 | 1.2101 | 1.2101 | 1.0847 | 1.0847 | 0.3661 | 0.3661 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.548 | 7.0 | 105 | 0.3110 | 0.9211 | 0.9211 | 0.7896 | 0.7896 | 0.6328 | 0.6328 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.4371 | 8.0 | 120 | 0.3392 | 0.9619 | 0.9619 | 0.8132 | 0.8132 | 0.5995 | 0.5995 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.355 | 9.0 | 135 | 0.3938 | 1.0366 | 1.0366 | 0.8153 | 0.8153 | 0.5349 | 0.5349 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2919 | 10.0 | 150 | 0.3484 | 0.9749 | 0.9749 | 0.7487 | 0.7487 | 0.5886 | 0.5886 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2595 | 11.0 | 165 | 0.2812 | 0.8759 | 0.8759 | 0.6265 | 0.6265 | 0.6679 | 0.6679 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2368 | 12.0 | 180 | 0.2534 | 0.8314 | 0.8314 | 0.6402 | 0.6402 | 0.7008 | 0.7008 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.227 | 13.0 | 195 | 0.2878 | 0.8861 | 0.8861 | 0.6769 | 0.6769 | 0.6601 | 0.6601 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1979 | 14.0 | 210 | 0.2405 | 0.8100 | 0.8100 | 0.6113 | 0.6113 | 0.7160 | 0.7160 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1622 | 15.0 | 225 | 0.2575 | 0.8382 | 0.8382 | 0.6017 | 0.6017 | 0.6959 | 0.6959 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1575 | 16.0 | 240 | 0.2945 | 0.8963 | 0.8963 | 0.6741 | 0.6741 | 0.6523 | 0.6523 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1479 | 17.0 | 255 | 0.3563 | 0.9859 | 0.9859 | 0.7367 | 0.7367 | 0.5792 | 0.5792 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1269 | 18.0 | 270 | 0.2806 | 0.8750 | 0.8750 | 0.6665 | 0.6665 | 0.6686 | 0.6686 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1257 | 19.0 | 285 | 0.3267 | 0.9441 | 0.9441 | 0.6739 | 0.6739 | 0.6142 | 0.6142 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.134 | 20.0 | 300 | 0.3780 | 1.0155 | 1.0155 | 0.7331 | 0.7331 | 0.5536 | 0.5536 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.1171 | 21.0 | 315 | 0.3890 | 1.0301 | 1.0301 | 0.7444 | 0.7444 | 0.5406 | 0.5406 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0934 | 22.0 | 330 | 0.3131 | 0.9242 | 0.9242 | 0.6923 | 0.6923 | 0.6303 | 0.6303 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1112 | 23.0 | 345 | 0.2912 | 0.8913 | 0.8913 | 0.6610 | 0.6610 | 0.6561 | 0.6561 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1038 | 24.0 | 360 | 0.3109 | 0.9209 | 0.9209 | 0.7019 | 0.7019 | 0.6329 | 0.6329 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.085 | 25.0 | 375 | 0.3469 | 0.9728 | 0.9728 | 0.7383 | 0.7383 | 0.5904 | 0.5904 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0843 | 26.0 | 390 | 0.3017 | 0.9073 | 0.9073 | 0.6848 | 0.6848 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.093 | 27.0 | 405 | 0.3269 | 0.9443 | 0.9443 | 0.7042 | 0.7042 | 0.6140 | 0.6140 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0846 | 28.0 | 420 | 0.3161 | 0.9286 | 0.9286 | 0.6937 | 0.6937 | 0.6267 | 0.6267 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0764 | 29.0 | 435 | 0.3244 | 0.9408 | 0.9408 | 0.7079 | 0.7079 | 0.6169 | 0.6169 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0697 | 30.0 | 450 | 0.3264 | 0.9437 | 0.9437 | 0.7093 | 0.7093 | 0.6145 | 0.6145 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| f0689afbc87d314db734861f12de73fb |
fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88 | fathyshalab | roberta | 14 | 0 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,538 | false |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_calendar-roberta-large-v1-2-88")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| a5d7397f5b3331369e5e0b49360f1527 |
nandysoham16/9-clustered_aug | nandysoham16 | distilbert | 8 | 0 | keras | 0 | null | false | true | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,306 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
['Republic_of_the_Congo', 'Dutch_Republic', 'Southern_Europe', 'Nanjing', 'Southampton', 'Guinea-Bissau', 'Zhejiang', 'New_Delhi', 'Hanover', 'Everton_F.C.', 'Lancashire', 'FC_Barcelona', 'Valencia', 'Antarctica', 'Eritrea', 'Israel', 'Arsenal_F.C.', 'Greece', 'East_India_Company', 'Southeast_Asia', 'Myanmar', 'Switzerland', 'Mali', 'Nigeria', 'Utrecht', 'Alsace', 'Galicia_(Spain)', 'Sichuan', 'Sahara', 'Tibet', 'Strasbourg', 'Central_African_Republic', 'Estonia', 'Paris', 'States_of_Germany', 'East_Prussia', 'Rajasthan', 'Cyprus', 'Hyderabad', 'Armenia', 'Iran', 'Liberia', 'Swaziland', 'Airport', 'Namibia', 'Egypt', 'Portugal', 'Tajikistan', 'Punjab,_Pakistan', 'Kathmandu']
- **Developed by:** nandysoham
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
| ab221b49418e3f3825c64faf832ec757 |
SRM47/gpt2-large-paraphraser | SRM47 | gpt2 | 13 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 908 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-paraphraser
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on an unknown dataset.
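A minimal text-generation sketch; the prompt format expected by this paraphraser is not documented here, so the input sentence and decoding settings are assumptions:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="SRM47/gpt2-large-paraphraser")

# The separator/prompt format used during fine-tuning is not documented in this card,
# so the plain sentence below is only a placeholder.
print(generator("The quick brown fox jumps over the lazy dog.", max_new_tokens=40, do_sample=True))
```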
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 5ee329a8a940baf46687bfe88be23f81 |
SRM47/gpt2-paraphraser | SRM47 | gpt2 | 13 | 3 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 890 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-paraphraser
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| ead4ea87b27e1c5a7d2a996d63d7b7b0 |
jonatasgrosman/exp_w2v2r_en_xls-r_age_teens-10_sixties-0_s364 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 476 | false | # exp_w2v2r_en_xls-r_age_teens-10_sixties-0_s364
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 8967e769b92e994ec8f41fde41e55df1 |
swardiantara/distilbert-base-cased-finetuned-ner | swardiantara | distilbert | 13 | 17 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,549 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0709
- Precision: 0.9170
- Recall: 0.9272
- F1: 0.9221
- Accuracy: 0.9804
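A minimal inference sketch with the 🤗 token-classification `pipeline` (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="swardiantara/distilbert-base-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```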
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2732 | 1.0 | 878 | 0.0916 | 0.8931 | 0.8961 | 0.8946 | 0.9736 |
| 0.0717 | 2.0 | 1756 | 0.0726 | 0.9166 | 0.9212 | 0.9189 | 0.9794 |
| 0.0364 | 3.0 | 2634 | 0.0709 | 0.9170 | 0.9272 | 0.9221 | 0.9804 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
| b890fc31090f8df12450567c1d94e17b |
Lvxue/distilled-mt5-small-0.005-1 | Lvxue | mt5 | 14 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en', 'ro'] | ['wmt16'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,038 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-0.005-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8123
- Bleu: 7.6523
- Gen Len: 44.3867
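The BLEU figure above is a corpus-level score; below is a hedged sketch of computing such a score with the 🤗 `evaluate` library (the sentences are placeholders, and sacreBLEU is assumed as the scorer):

```python
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["the cat sat on the mat"]          # model translations (placeholders)
references = [["the cat is sitting on the mat"]]  # one list of references per prediction
print(round(bleu.compute(predictions=predictions, references=references)["score"], 2))
```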
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| e3d255e41f1a3c0d848141811f4d4571 |
OscarHoekstra/distilbert-base-uncased-finetuned-ner | OscarHoekstra | distilbert | 13 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0586
- Precision: 0.9275
- Recall: 0.9371
- F1: 0.9323
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2391 | 1.0 | 878 | 0.0706 | 0.9189 | 0.9230 | 0.9210 | 0.9813 |
| 0.054 | 2.0 | 1756 | 0.0576 | 0.9291 | 0.9370 | 0.9331 | 0.9841 |
| 0.0298 | 3.0 | 2634 | 0.0586 | 0.9275 | 0.9371 | 0.9323 | 0.9841 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| b206812d4b927d2f1ad7742b0c78f1a6 |
jeapaul/wav2vec2-base-torgo-demo-m04-nolm | jeapaul | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,186 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-torgo-demo-m04-nolm
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5735
- Wer: 1.0
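A WER of 1.0 means every reference word was counted as an error. For reference, a minimal sketch of computing WER with the 🤗 `evaluate` library (the strings are placeholders):

```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["the cat sat on the mat"]  # placeholder hypothesis
references = ["the cat sat on a mat"]     # placeholder reference
print(wer_metric.compute(predictions=predictions, references=references))
```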
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.431 | 0.88 | 500 | 4.5567 | 1.0 |
| 3.4727 | 1.75 | 1000 | 3.5626 | 1.0 |
| 3.3879 | 2.63 | 1500 | 3.9274 | 1.0 |
| 3.3513 | 3.5 | 2000 | 3.4813 | 1.0 |
| 3.3538 | 4.38 | 2500 | 3.7300 | 1.0 |
| 3.3539 | 5.25 | 3000 | 3.5714 | 1.0 |
| 3.339 | 6.13 | 3500 | 3.6732 | 1.0 |
| 3.3038 | 7.01 | 4000 | 3.6788 | 1.0 |
| 3.35 | 7.88 | 4500 | 3.6715 | 1.0 |
| 3.338 | 8.76 | 5000 | 3.5161 | 1.0 |
| 3.3306 | 9.63 | 5500 | 3.7386 | 1.0 |
| 3.3266 | 10.51 | 6000 | 3.4908 | 1.0 |
| 3.3184 | 11.38 | 6500 | 3.7669 | 1.0 |
| 3.3189 | 12.26 | 7000 | 3.6142 | 1.0 |
| 3.331 | 13.13 | 7500 | 3.5619 | 1.0 |
| 3.3139 | 14.01 | 8000 | 3.6632 | 1.0 |
| 3.3069 | 14.89 | 8500 | 3.6127 | 1.0 |
| 3.315 | 15.76 | 9000 | 3.5562 | 1.0 |
| 3.3079 | 16.64 | 9500 | 3.7094 | 1.0 |
| 3.3077 | 17.51 | 10000 | 3.5412 | 1.0 |
| 3.3188 | 18.39 | 10500 | 3.6303 | 1.0 |
| 3.3133 | 19.26 | 11000 | 3.5704 | 1.0 |
| 3.3428 | 20.14 | 11500 | 3.5662 | 1.0 |
| 3.3082 | 21.02 | 12000 | 3.6084 | 1.0 |
| 3.3238 | 21.89 | 12500 | 3.6164 | 1.0 |
| 3.3119 | 22.77 | 13000 | 3.5787 | 1.0 |
| 3.2981 | 23.64 | 13500 | 3.6356 | 1.0 |
| 3.3153 | 24.52 | 14000 | 3.5726 | 1.0 |
| 3.3065 | 25.39 | 14500 | 3.5908 | 1.0 |
| 3.3199 | 26.27 | 15000 | 3.5823 | 1.0 |
| 3.306 | 27.15 | 15500 | 3.5658 | 1.0 |
| 3.3153 | 28.02 | 16000 | 3.5818 | 1.0 |
| 3.2762 | 28.9 | 16500 | 3.5810 | 1.0 |
| 3.3196 | 29.77 | 17000 | 3.5735 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
| 067d02bdc756a10c5b196d20849285fc |
Helsinki-NLP/opus-mt-sv-bcl | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-sv-bcl
* source languages: sv
* target languages: bcl
* OPUS readme: [sv-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bcl | 39.5 | 0.607 |
| d944a080ec8f3166297cf694a0118331 |
michojan/bert-finetuned-ner | michojan | bert | 12 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0622
- Precision: 0.9324
- Recall: 0.9495
- F1: 0.9409
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0862 | 1.0 | 1756 | 0.0649 | 0.9193 | 0.9371 | 0.9281 | 0.9831 |
| 0.0406 | 2.0 | 3512 | 0.0576 | 0.9235 | 0.9472 | 0.9352 | 0.9850 |
| 0.0197 | 3.0 | 5268 | 0.0622 | 0.9324 | 0.9495 | 0.9409 | 0.9864 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 8533e190887bfb9f8ab0a2fa02dbd14c |
anuragshas/whisper-large-v2-ml | anuragshas | whisper | 23 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ml'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,322 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Malayalam
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 ml dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4170
- Wer: 25.4789
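A minimal transcription sketch with the 🤗 `pipeline` API (the audio file name is a placeholder; input should be 16 kHz mono speech):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-large-v2-ml")
print(asr("sample_malayalam.wav")["text"])
```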
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0 | 71.01 | 1000 | 0.4170 | 25.4789 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 245bad8cc3231a12b42b7f394f982455 |
gagan3012/wav2vec2-xlsr-khmer | gagan3012 | wav2vec2 | 13 | 9 | transformers | 1 | automatic-speech-recognition | true | false | true | apache-2.0 | ['km'] | ['OpenSLR', 'common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,993 | false |
# Wav2Vec2-Large-XLSR-53-khmer
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Khmer using the [Common Voice](https://huggingface.co/datasets/common_voice), and [OpenSLR Kh](http://www.openslr.org/42/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
import pandas as pd
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

!wget https://www.openslr.org/resources/42/km_kh_male.zip
!unzip km_kh_male.zip
!ls km_kh_male

colnames = ['path', 'sentence']
df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames)
df['path'] = '/content/km_kh_male/wavs/' + df['path'] + '.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/km_kh_male/line_index_test.csv')

test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv', split='train')

processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-nepali")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Result
Prediction: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
Reference: ['पारानाको ब्राजिली राज्यमा रहेको राजधानी', 'देवराज जोशी त्रिभुवन विश्वविद्यालयबाट शिक्षाशास्त्रमा स्नातक हुनुहुन्छ']
## Evaluation
The model can be evaluated as follows on the Khmer test data:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from sklearn.model_selection import train_test_split
import pandas as pd

!wget https://www.openslr.org/resources/42/km_kh_male.zip
!unzip km_kh_male.zip
!ls km_kh_male

colnames = ['path', 'sentence']
df = pd.read_csv('/content/km_kh_male/line_index.tsv', sep='\t', header=None, names=colnames)
df['path'] = '/content/km_kh_male/wavs/' + df['path'] + '.wav'
train, test = train_test_split(df, test_size=0.1)
test.to_csv('/content/km_kh_male/line_index_test.csv')

test_dataset = load_dataset('csv', data_files='/content/km_kh_male/line_index_test.csv', split='train')
wer = load_metric("wer")
cer = load_metric("cer")

processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-khmer")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and clean the reference text.
def speech_file_to_array_fn(batch):
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run the model over the test set and collect predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["text"])))
```
**Test Result**: 24.96 %
WER: 24.962519
CER: 6.950925
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1yo_OTMH8FHQrAKCkKdQGMqpkj-kFhS_2?usp=sharing) | 1003dd4006dd171238f3ac8ddd5e84e5 |
saikatc/NatGen | saikatc | t5 | 9 | 52 | transformers | 0 | text2text-generation | true | false | false | mit | ['code'] | ['CodeSearchNet', 'CodeXGLUE'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['code generation', 'code translation', 'bug fixing'] | false | true | true | 1,432 | false |
Pretrained model for NatGen: Generative Pre-training by “Naturalizing” Source Code [[`paper`]](https://dl.acm.org/doi/abs/10.1145/3540250.3549162),[[`code`]](https://github.com/saikat107/NatGen),[[`slide`]](https://docs.google.com/presentation/d/1T6kjiohAAR1YvcNvTASR94HptA3xHGCl/edit?usp=sharing&ouid=111755026725574085503&rtpof=true&sd=true).
To load the model,
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("saikatc/NatGen")
model = AutoModelForSeq2SeqLM.from_pretrained("saikatc/NatGen")
```
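An illustrative generation call continuing from the snippet above; the input code fragment and decoding settings are assumptions, not taken from the NatGen paper or repository:

```
# Illustrative only: the input snippet and decoding settings are assumptions.
inputs = tokenizer("if (x == 0) { return y; } else { return z; }", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```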
For citation,
```
@inproceedings{chakraborty2022natgen,
author = {Chakraborty, Saikat and Ahmed, Toufique and Ding, Yangruibo and Devanbu, Premkumar T. and Ray, Baishakhi},
title = {NatGen: Generative Pre-Training by “Naturalizing” Source Code},
year = {2022},
isbn = {9781450394130},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3540250.3549162},
doi = {10.1145/3540250.3549162},
booktitle = {Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
pages = {18–30},
numpages = {13},
keywords = {Neural Network, Semantic Preserving Transformation, Source Code Transformer, Source Code Pre-training},
location = {Singapore, Singapore},
series = {ESEC/FSE 2022}
}
```
| 3e199f7d658f90f15dd0097606648223 |
yerevann/x-r-hy | yerevann | wav2vec2 | 11 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,403 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-2b-armenian-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5166
- Wer: 0.7397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.7057 | 2.38 | 200 | 0.7731 | 0.8091 |
| 0.5797 | 4.76 | 400 | 0.8279 | 0.7804 |
| 0.4341 | 7.14 | 600 | 1.0343 | 0.8285 |
| 0.3135 | 9.52 | 800 | 1.0551 | 0.8066 |
| 0.2409 | 11.9 | 1000 | 1.0686 | 0.7897 |
| 0.1998 | 14.29 | 1200 | 1.1329 | 0.7766 |
| 0.1729 | 16.67 | 1400 | 1.3234 | 0.8567 |
| 0.1533 | 19.05 | 1600 | 1.2432 | 0.8160 |
| 0.1354 | 21.43 | 1800 | 1.2780 | 0.7954 |
| 0.12 | 23.81 | 2000 | 1.2228 | 0.8054 |
| 0.1175 | 26.19 | 2200 | 1.3484 | 0.8129 |
| 0.1141 | 28.57 | 2400 | 1.2881 | 0.9130 |
| 0.1053 | 30.95 | 2600 | 1.1972 | 0.7910 |
| 0.0954 | 33.33 | 2800 | 1.3702 | 0.8048 |
| 0.0842 | 35.71 | 3000 | 1.3963 | 0.7960 |
| 0.0793 | 38.1 | 3200 | 1.4690 | 0.7991 |
| 0.0707 | 40.48 | 3400 | 1.5045 | 0.8085 |
| 0.0745 | 42.86 | 3600 | 1.4749 | 0.8004 |
| 0.0693 | 45.24 | 3800 | 1.5047 | 0.7960 |
| 0.0646 | 47.62 | 4000 | 1.4216 | 0.7997 |
| 0.0555 | 50.0 | 4200 | 1.4676 | 0.8029 |
| 0.056 | 52.38 | 4400 | 1.4273 | 0.8104 |
| 0.0465 | 54.76 | 4600 | 1.3999 | 0.7841 |
| 0.046 | 57.14 | 4800 | 1.6130 | 0.8473 |
| 0.0404 | 59.52 | 5000 | 1.5586 | 0.7841 |
| 0.0403 | 61.9 | 5200 | 1.3959 | 0.7653 |
| 0.0404 | 64.29 | 5400 | 1.5318 | 0.8041 |
| 0.0365 | 66.67 | 5600 | 1.5300 | 0.7854 |
| 0.0338 | 69.05 | 5800 | 1.5051 | 0.7885 |
| 0.0307 | 71.43 | 6000 | 1.5647 | 0.7935 |
| 0.0235 | 73.81 | 6200 | 1.4919 | 0.8154 |
| 0.0268 | 76.19 | 6400 | 1.5259 | 0.8060 |
| 0.0275 | 78.57 | 6600 | 1.3985 | 0.7897 |
| 0.022 | 80.95 | 6800 | 1.5515 | 0.8154 |
| 0.017 | 83.33 | 7000 | 1.5737 | 0.7647 |
| 0.0205 | 85.71 | 7200 | 1.4876 | 0.7572 |
| 0.0174 | 88.1 | 7400 | 1.6331 | 0.7829 |
| 0.0188 | 90.48 | 7600 | 1.5108 | 0.7685 |
| 0.0134 | 92.86 | 7800 | 1.7125 | 0.7866 |
| 0.0125 | 95.24 | 8000 | 1.6042 | 0.7635 |
| 0.0133 | 97.62 | 8200 | 1.4608 | 0.7478 |
| 0.0272 | 100.0 | 8400 | 1.4784 | 0.7309 |
| 0.0133 | 102.38 | 8600 | 1.4471 | 0.7459 |
| 0.0094 | 104.76 | 8800 | 1.4852 | 0.7272 |
| 0.0103 | 107.14 | 9000 | 1.5679 | 0.7409 |
| 0.0088 | 109.52 | 9200 | 1.5090 | 0.7309 |
| 0.0077 | 111.9 | 9400 | 1.4994 | 0.7290 |
| 0.0068 | 114.29 | 9600 | 1.5008 | 0.7340 |
| 0.0054 | 116.67 | 9800 | 1.5166 | 0.7390 |
| 0.0052 | 119.05 | 10000 | 1.5166 | 0.7397 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| 9742711e9d3780e071320d4398e62ee6 |
NbAiLab/nb-wav2vec2-300m-nynorsk | NbAiLab | wav2vec2 | 31 | 23 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['nn-NO'] | ['NbAiLab/NPSC'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'NbAiLab/NPSC', False, 'nn', 'nn-NO'] | true | true | true | 5,245 | false |
# Norwegian Wav2Vec2 Model - 300M - VoxRex - Nynorsk
This model is finetuned on top of feature extractor [VoxRex-model](https://huggingface.co/KBLab/wav2vec2-large-voxrex) from the National Library of Sweden. The finetuned model achieves the following results on the test set with a 5-gram KenLM. The numbers in parentheses are the results without the language model:
- **WER: 0.1222** (0.1537)
- **CER: 0.0419** (0.0468)
## Model description
This is one of several Wav2Vec-models our team created during the 🤗 hosted [Robust Speech Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614?s=09). This is the complete list of our models and their final scores:
| Model | Final WER |
|:--------------|:------------|
| [NbAiLab/nb-wav2vec2-1b-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-1b-bokmaal) | 6.33 |
| [NbAiLab/nb-wav2vec2-300m-bokmaal](https://huggingface.co/NbAiLab/nb-wav2vec2-300m-bokmaal) | 7.03 |
| NbAiLab/nb-wav2vec2-300m-nynorsk (this model) | 12.22 |
### Dataset
In parallel with the event, the team also converted the [Norwegian Parliamentary Speech Corpus (NPSC)](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/) to the [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC) in 🤗 Dataset format and used that as the main source for training.
## Code
We have released all the code developed during the event so that the Norwegian NLP community can build upon it when developing even better Norwegian ASR models. The finetuning of these models is not very computationally demanding. After following the instructions here, you should be able to train your own automatic speech recognition system in less than a day with an average GPU.
## Team
The following people contributed to building this model: Rolv-Arild Braaten, Per Egil Kummervold, Andre Kåsen, Javier de la Rosa, Per Erik Solberg, and Freddy Wetjen.
## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from 🤗](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.
When you have verified that you are able to do this, create a fresh new repo. You can then start by copying the files ```run.sh``` and ```run_speech_recognition_ctc.py``` from our repo. Running these will create all the other necessary files, and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR. Good luck!
### Language Model
As the scores indicate, adding even a simple 5-gram language model will improve the results. 🤗 has provided another [very nice blog](https://huggingface.co/blog/wav2vec2-with-ngram) explaining how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
### Parameters
The final model was run using these parameters:
```
--dataset_name="NbAiLab/NPSC"
--model_name_or_path="KBLab/wav2vec2-large-voxrex"
--dataset_config_name="16K_mp3_nynorsk"
--output_dir="./"
--overwrite_output_dir
--num_train_epochs="80"
--per_device_train_batch_size="16"
--per_device_eval_batch_size="16"
--gradient_accumulation_steps="2"
--learning_rate="1e-4"
--warmup_steps="2000"
--length_column_name="input_length"
--evaluation_strategy="steps"
--text_column_name="text"
--save_steps="500"
--eval_steps="500"
--logging_steps="100"
--layerdrop="0.041"
--attention_dropout="0.094"
--activation_dropout="0.055"
--hidden_dropout="0.047"
--save_total_limit="3"
--freeze_feature_encoder
--feat_proj_dropout="0.04"
--mask_time_prob="0.082"
--mask_time_length="10"
--mask_feature_prob="0.25"
--mask_feature_length="64"
--gradient_checkpointing
--min_duration_in_seconds="0.5"
--max_duration_in_seconds="30.0"
--use_auth_token
--seed="42"
--fp16
--group_by_length
--do_train --do_eval
--push_to_hub
--preprocessing_num_workers="32"
```
Using these settings, the training might take 3-4 days on an average GPU. You can, however, get a decent model and faster results by tweaking these parameters.
| Parameter| Comment |
|:-------------|:-----|
| per_device_train_batch_size | Adjust this to the maximum of available memory. 16 or 24 might be good settings depending on your system |
|gradient_accumulation_steps |Can be adjusted even further up to increase batch size and speed up training without running into memory issues |
| learning_rate|Can be increased, maybe as high as 1e-4. Speeds up training but might add instability |
| epochs| Can be decreased significantly. This is a huge dataset and you might get a decent result already after a couple of epochs|
| cdbc80cc45d936b06567ec67e9fba857 |
ALM/whisper-it-small | ALM | whisper | 22 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,372 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Italian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2517
- Wer: 11.2714
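For reference, here is a minimal inference sketch (not part of the original card) using the 🤗 `pipeline` API; the audio file name is a placeholder:
```python
# Hedged usage sketch: transcribe Italian speech with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ALM/whisper-it-small")
print(asr("sample_it.wav")["text"])  # any common audio format; resampled to 16 kHz internally
```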
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2529 | 1.0 | 466 | 0.2758 | 12.4907 |
| 0.1711 | 2.0 | 932 | 0.2517 | 11.2714 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 5f6ef90f6a9b9122cdf53a2b96d1ec9a |
johko/capdec_005 | johko | null | 3 | 0 | null | 1 | image-to-text | false | false | false | apache-2.0 | ['en'] | ['MS-COCO', 'Flickr30k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Image Captioning'] | false | true | true | 1,348 | false |
# CapDec - NoiseLevel: 0.005
## Model Description
These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf).
Their method aims to train CLIP with only text samples. To do so, they inject zero-mean Gaussian noise into the text embeddings before decoding.
In their words:
*Specifically, we assume that the visual embedding corresponding to a text embedding
lies somewhere within a ball of small radius around the text embedding (see Fig. 1).
We would like all text embeddings in this ball to decode to the same caption,which should
also correspond to the visual content mapped to this ball. We implement this intuition by
adding zero-mean Gaussian noise of STD to the text embedding before decoding it.*
The "Noise Level" of 0.005 is equivalent to the Noise Variance which is the square of the STD.
The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
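To make the idea concrete, here is an illustrative sketch (not the authors' code) of the noise injection applied to a text embedding at this checkpoint's noise level:
```python
# Hedged sketch: add zero-mean Gaussian noise with variance 0.005 (this checkpoint's
# "noise level") to a CLIP text embedding before decoding it into a caption.
import torch

noise_variance = 0.005
std = noise_variance ** 0.5  # the paper parameterizes the noise by its STD

def inject_noise(text_embedding: torch.Tensor) -> torch.Tensor:
    """Simulate a visual embedding lying in a small ball around the text embedding."""
    return text_embedding + torch.randn_like(text_embedding) * std
```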
## Datasets
The authors trained the model on MS-COCO and Flickr30k datasets.
## Performance
The authors don't explicitly report the performance for this NoiseLevel but it can be estimated from the following figure from the original paper:
![](capdec_performance.png) | ada4f169b45adce9daa677fc45cee3c8 |
sd-concepts-library/tangles | sd-concepts-library | null | 13 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,249 | false | ### Tangles on Stable Diffusion
This is the `<cora-tangle>` concept taught to Stable Diffusion via Textual Inversion. The concept is based on my daughter drawing random color fields with her hands or a paint brush and then me filling in the rest of the space with geometric doodles.
You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook.
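Alternatively, here is a hedged sketch of loading the concept with `diffusers` (assuming a recent release that provides `load_textual_inversion`; the base checkpoint choice is an assumption, not specified by this card):
```python
# Hedged sketch: load the <cora-tangle> embedding into a Stable Diffusion pipeline.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # assumed base model
pipe.load_textual_inversion("sd-concepts-library/tangles", token="<cora-tangle>")
image = pipe("Lush alien aquarium in the style <cora-tangle>").images[0]
image.save("cora_tangle_sample.png")
```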
Sample results below and more results [here](https://imgur.com/user/ritaleonova/posts.html).
Seed: Lush alien aquarium in the style `<cora-tangle>`
![<cora-tangle> lush alien aquarium](https://i.imgur.com/ivMLbQk.jpeg)
Seed: Nighttime city scape in the style `<cora-tangle>`
![<cora-tangle> nighttime cityscape](https://i.imgur.com/Q9XP2eb.png)
Seed: Hand holding paintbrush in the style `<cora-tangle>`
![<cora-tangle> hand holding paintbrush](https://i.imgur.com/7kHm7xm.png)
Seed: Fox in the style `<cora-tangle>`
![<cora-tangle> fox](https://i.imgur.com/Eu2WoAE.png)
Seed: Baby bird in the style `<cora-tangle>`
![<cora-tangle> bird](https://i.imgur.com/bza6jwp.png)
Here were the seed images for the style:
![<cora-tangle> 0](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/1.jpeg)
![<cora-tangle> 1](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/5.jpeg)
![<cora-tangle> 2](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/0.jpeg)
![<cora-tangle> 3](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/4.jpeg)
![<cora-tangle> 4](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/2.jpeg)
![<cora-tangle> 5](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/3.jpeg)
![<cora-tangle> 6](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/6.jpeg)
![<cora-tangle> 7](https://huggingface.co/sd-concepts-library/tangles/resolve/main/concept_images/7.jpeg)
You can train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). | 60be6330358aba5c9a0f3385458bdf91 |
flamesbob/akaramModel | flamesbob | null | 7 | 0 | null | 1 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,476 | false | akrammodel_10000 was trained on a blend of (v1-5-pruned[a9263745] and anythingv3[6569e224] weighted sum M0.5) for 10000 steps, classification "artstyle", instance m_akram
## example
<img alt="Showcase" src="https://huggingface.co/flamesbob/akaramModel/resolve/main/00035-1277575582-m_akram%2C((bes___.png"/>
<img alt="Showcase" src="https://huggingface.co/flamesbob/akaramModel/resolve/main/00036-1277575585-m_akram%2C((bes___.png"/>
<img alt="Showcase" src="https://huggingface.co/flamesbob/akaramModel/resolve/main/00037-1277575584-m_akram,((bes___.png"/>
<img alt="Showcase" src="https://huggingface.co/flamesbob/akaramModel/resolve/main/00039-3151256283-m_akram,((bes___.png"/>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL-M License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
Please read the full license here.
google/multiberts-seed_1-step_600k | google | bert | 8 | 12 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_600k'] | false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_600k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| d4b1d0ecd6a7d914c4b8f4f706d8ef53 |
DOOGLAK/Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['tagged_one50v2_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,562 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6200
- Precision: 0.125
- Recall: 0.0007
- F1: 0.0015
- Accuracy: 0.7835
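For reference, here is a minimal usage sketch (not part of the original card) with the 🤗 `pipeline` API; note the low precision and recall above, so predictions should be treated with caution:
```python
# Hedged usage sketch: run token classification with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_One_50v2_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",
)
print(ner("The Wikipedia article was written in New York."))
```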
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 18 | 0.7424 | 0.0 | 0.0 | 0.0 | 0.7776 |
| No log | 2.0 | 36 | 0.6479 | 0.0909 | 0.0002 | 0.0005 | 0.7819 |
| No log | 3.0 | 54 | 0.6200 | 0.125 | 0.0007 | 0.0015 | 0.7835 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 5b9623a6aee88cfe6a1afc4ab864d2d6 |
shields/whisper-small-hindi | shields | whisper | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hi'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,490 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hindi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3323
- Wer: 31.3299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1464 | 1.22 | 1000 | 0.3049 | 36.8492 |
| 0.0636 | 2.44 | 2000 | 0.2910 | 32.8875 |
| 0.0191 | 3.67 | 3000 | 0.3117 | 31.7828 |
| 0.0096 | 4.89 | 4000 | 0.3323 | 31.3299 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 7fd339ee9d30b480a98433adc9faa522 |
wmingch/distilbert-base-uncased-finetuned-emotion | wmingch | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.925
- F1: 0.9250
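For reference, here is a minimal usage sketch (not part of the original card) with the 🤗 `pipeline` API:
```python
# Hedged usage sketch: classify the emotion of a sentence with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="wmingch/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled that the fine-tuning finally converged!"))
```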
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8164 | 1.0 | 250 | 0.3181 | 0.9015 | 0.8984 |
| 0.2434 | 2.0 | 500 | 0.2174 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 38e8bc5b6f2c8a74c7011926bfdeab9b |
elliotthwang/t5-small-finetuned-xlsum-chinese-tradition | elliotthwang | t5 | 14 | 42 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['xlsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,932 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xlsum-chinese-tradition
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2061
- Rouge1: 0.8887
- Rouge2: 0.0671
- Rougel: 0.889
- Rougelsum: 0.8838
- Gen Len: 6.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.4231 | 1.0 | 2336 | 1.2586 | 0.711 | 0.0528 | 0.7029 | 0.7053 | 7.3368 |
| 1.378 | 2.0 | 4672 | 1.2281 | 0.9688 | 0.05 | 0.9574 | 0.9656 | 7.0392 |
| 1.3567 | 3.0 | 7008 | 1.2182 | 0.9534 | 0.1035 | 0.9531 | 0.9472 | 6.7437 |
| 1.3339 | 4.0 | 9344 | 1.2096 | 0.9969 | 0.0814 | 0.9969 | 0.9938 | 7.4503 |
| 1.3537 | 5.0 | 11680 | 1.2072 | 0.8429 | 0.0742 | 0.8372 | 0.838 | 6.8049 |
| 1.3351 | 6.0 | 14016 | 1.2061 | 0.8887 | 0.0671 | 0.889 | 0.8838 | 6.8779 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 7a41690199c61b98ec87bf8a5fcf2939 |
yhavinga/ul2-large-dutch-english | yhavinga | t5 | 18 | 19 | transformers | 0 | text2text-generation | true | false | true | apache-2.0 | ['nl', 'en', 'multilingual'] | ['yhavinga/mc4_nl_cleaned', 'yhavinga/nedd_wiki_news'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq'] | false | true | true | 10,326 | false |
# ul2-large-dutch-english for Dutch and English
Pretrained T5 model on Dutch and English using a UL2 (Mixture-of-Denoisers) objective.
The T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on
a specific downstream task to be useful in practice.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
`ul2-large-dutch-english` T5 is a transformers model pretrained on a very large corpus of
Dutch and English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way
(which is why it can use lots of publicly available data) with an automatic process to generate
inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off during pre-training. Dropout should be re-enabled during fine-tuning
- Pre-trained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training
paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where
the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers
that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of
three denoising tasks:
1. R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective;
2. X-denoising (or extreme span corruption); and
3. S-denoising (or sequential PrefixLM).
During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training
denoising task. During the pre-training, a paradigm token is inserted to the input
(`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand.
Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream
fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task,
like text classification, unlike Google's original T5 model.
**Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision
so fine-tune them with full fp32 precision. Fine-tuning with Flax in bf16 - `model.to_bf16()` - is possible
if you set the mask correctly to exclude layernorm and embedding layers. Also note that the T5x pre-training
and fine-tuning configs set `z_loss` to 1e-4, which is used to keep the loss scale from underflowing.
You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, most likely you can get better results if you insert a prefix token
of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts.
For general language understanding fine-tuning tasks, you could use the `[NLU]` token.
For GPT-style causal language generation, you could use the `[S2S]` token.
The token `[NLG]` of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language
generation, so the token `[NLG]` could perhaps be used for language-generation fine-tuning as well.
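As a small illustration of this convention (an assumption-based sketch, not from the original card), the paradigm token is simply prepended to the input text before tokenization:
```python
# Hedged sketch: prepend the mode-switching token before tokenizing.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-large-dutch-english", use_fast=False)
# "[NLU]" for understanding tasks, "[S2S]" for GPT-style generation, "[NLG]" for X-denoising.
inputs = tokenizer("[NLU] Het is een mooie dag.", return_tensors="pt")
```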
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-large-dutch-english", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-large-dutch-english")
```
and in Flax:
```python
from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-large-dutch-english", use_fast=False)
model = FlaxT5ForConditionalGeneration.from_pretrained("yhavinga/ul2-large-dutch-english")
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
The `ul2-large-dutch-english` T5 model was pre-trained simultaneously on a combination of several datasets,
including the `full_en_nl` config of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web
crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), the English subset of Wikipedia (2022-03-01),
and a subset of "mc4_nl_cleaned"
containing only texts from Dutch and Belgian newspapers. This last dataset is oversampled to bias the model
towards descriptions of events in the Netherlands and Belgium.
## Training procedure
### Preprocessing
The ul2-large-dutch-english T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens.
The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper,
`[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline.
During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens.
The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes
between `dutch` and `Dutch`.
Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/),
for 1000000 steps with a batch size of 64
(in total 32 B tokens).
The optimizer used was AdaFactor with learning rate warmup for 10K steps with a constant learning rate of 1e-2,
and then an inverse square root decay (exponential decay) of the learning rate after.
The model was trained with Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) with help
from [Stephenn Fernandes](https://huggingface.co/StephennFernandes) to get started writing task definitions that wrap
HF datasets.
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and
slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2 by the authors
of the Finnish ul2 models. The UL2 objective code that was used is available in the repository
[Finnish-NLP/ul2-base-nl36-finnish](https://huggingface.co/Finnish-NLP/ul2-base-nl36-finnish) in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, except for the denoiser mixing rates:
20% was used for S-denoising (as suggested in chapter 4.5 of the paper),
and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
### Model list
Models in this series:
| | ul2-base-dutch-english | ul2-large-dutch-english | ul2-small-dutch-english |
|:---------------------|:-------------------------|:--------------------------|:--------------------------|
| model_type | t5 | t5 | t5 |
| _pipeline_tag | text2text-generation | text2text-generation | text2text-generation |
| d_model | 768 | 1024 | 512 |
| d_ff | 2048 | 2816 | 1024 |
| num_heads | 12 | 16 | 6 |
| d_kv | 64 | 64 | 64 |
| num_layers | 12 | 24 | 8 |
| num_decoder_layers | 12 | 24 | 8 |
| feed_forward_proj | gated-gelu | gated-gelu | gated-gelu |
| dense_act_fn | gelu_new | gelu_new | gelu_new |
| vocab_size | 32128 | 32128 | 32128 |
| tie_word_embeddings | 0 | 0 | 0 |
| torch_dtype | float32 | float32 | float32 |
| _gin_batch_size | 128 | 64 | 128 |
| _gin_z_loss | 0.0001 | 0.0001 | 0.0001 |
| _gin_t5_config_dtype | 'bfloat16' | 'bfloat16' | 'bfloat16' |
## Evaluation results
See the evaluation section in the interactive [Pre-training Dutch T5 Models](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models) blog.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and associated task definitions.
Thanks to [Stephenn Fernandes](https://huggingface.co/StephennFernandes) for helping me get started with the t5x framework.
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
| 11af792cb97dd975bf0c26f2b6c4c550 |
espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave | espnet | null | 27 | 13 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['en'] | ['ljspeech'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,865 | false | ## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5443814/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
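In the meantime, here is a hedged sketch of ESPnet2 inference (assuming `espnet`, `espnet_model_zoo` and `soundfile` are installed; this is not an official demo):
```python
# Hedged sketch: synthesize speech with the ESPnet2 Text2Speech helper.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)
output = tts("Hello from the VITS model trained on LJSpeech.")
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```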
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 07142115fdfb76f614ae21ac35d98fbe |
mvicentel/ddpm-butterflies-128 | mvicentel | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,231 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
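Until the card's own snippet is filled in, a hedged sketch (assuming a recent `diffusers` release) might look like this:
```python
# Hedged sketch: sample butterfly images from the unconditional DDPM pipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("mvicentel/ddpm-butterflies-128")
image = pipeline(batch_size=1).images[0]
image.save("butterfly.png")
```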
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mvicentel/ddpm-butterflies-128/tensorboard?#scalars)
| 07e318f1bdf0841070942e0de111a8df |
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-extractive | Atharvgarg | bart | 17 | 5 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarisation', 'generated_from_trainer'] | true | true | true | 1,683 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news-on-extractive
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5869
- Rouge1: 39.4885
- Rouge2: 31.7487
- Rougel: 31.9013
- Rougelsum: 34.0825
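For reference, here is a minimal usage sketch (not part of the original card) with the 🤗 `pipeline` API; the article below is a placeholder:
```python
# Hedged usage sketch: summarise a BBC-style news article with the fine-tuned checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-extractive")
article = (
    "The local council has approved plans for a new cycle path connecting the harbour "
    "to the city centre, with construction expected to start next spring."
)
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```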
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.4649 | 1.0 | 445 | 1.5047 | 39.1053 | 31.6651 | 32.3242 | 33.9332 |
| 1.2224 | 2.0 | 890 | 1.4986 | 39.4115 | 31.7894 | 32.1057 | 34.0454 |
| 1.0099 | 3.0 | 1335 | 1.5322 | 39.5936 | 31.9984 | 32.2283 | 34.1798 |
| 0.8687 | 4.0 | 1780 | 1.5869 | 39.4885 | 31.7487 | 31.9013 | 34.0825 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 480baed95310d03e3da1ae2970392975 |
Esquivies/cat-toy | Esquivies | null | 43 | 4 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,200 | false | ### Cat toy on Stable Diffusion via Dreambooth
#### model by Esquivies
This is the Stable Diffusion model fine-tuned on the Cat toy concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of brat pitt**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
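A hedged local-inference sketch (assuming this repository contains a full Stable Diffusion pipeline in `diffusers` format, which the card does not guarantee):
```python
# Hedged sketch: generate an image with the Dreambooth-tuned pipeline and its instance prompt.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Esquivies/cat-toy")
image = pipe("a photo of brat pitt").images[0]
image.save("cat_toy_sample.png")
```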
Here are the images used for training this concept:
![image 0](https://huggingface.co/Esquivies/cat-toy/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/Esquivies/cat-toy/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/Esquivies/cat-toy/resolve/main/concept_images/3.jpeg)
![image 3](https://huggingface.co/Esquivies/cat-toy/resolve/main/concept_images/0.jpeg)
| 34b8590d8cabdaad541296e4da493fb2 |
MultiBertGunjanPatrick/multiberts-seed-4-500k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | true | true | 6,483 | false | # MultiBERTs Seed 4 Checkpoint 500k (uncased)
Seed 4 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-500k')
model = BertModel.from_pretrained("multiberts-seed-4-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| d65205c87793c27312674892298a16db |
MultiBertGunjanPatrick/multiberts-seed-4-120k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | true | true | 6,483 | false | # MultiBERTs Seed 4 Checkpoint 120k (uncased)
Seed 4 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-120k')
model = BertModel.from_pretrained("multiberts-seed-4-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| a4b521c8ab0bd08ac934bbab14ea294d |
DeepaKrish/distilbert-base-uncased-finetuned | DeepaKrish | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,541 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1137
- Accuracy: 0.9733
- F1: 0.9743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0868 | 1.0 | 1370 | 0.1098 | 0.9729 | 0.9738 |
| 0.0598 | 2.0 | 2740 | 0.1137 | 0.9733 | 0.9743 |
| 0.0383 | 3.0 | 4110 | 0.1604 | 0.9721 | 0.9731 |
| 0.0257 | 4.0 | 5480 | 0.1671 | 0.9717 | 0.9729 |
| 0.016 | 5.0 | 6850 | 0.1904 | 0.9709 | 0.9720 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.10.3
| 856e1a78722a0a34779d481e2246d490 |
jonatasgrosman/exp_w2v2t_en_vp-fr_s691 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 475 | false | # exp_w2v2t_en_vp-fr_s691
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
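For reference, a minimal transcription sketch with HuggingSound (the audio path is a placeholder):
```python
# Hedged usage sketch: transcribe English audio with the HuggingSound wrapper.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_vp-fr_s691")
transcriptions = model.transcribe(["sample_en.wav"])
print(transcriptions[0]["transcription"])
```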
| f3e051fc321f5116fd9a53b5e1b56466 |
c17hawke/bert-fine-tuned-cola_2 | c17hawke | bert | 8 | 1 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,211 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola_2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3078
- Validation Loss: 0.4072
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4976 | 0.4236 | 0 |
| 0.3078 | 0.4072 | 1 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 174fc4663d1cc555c03b7c92714b4efe |
kumarprashant556/wav2vec2-large-xls-r-300m-tr-colab | kumarprashant556 | wav2vec2 | 22 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,246 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7540
- eval_wer: 0.7217
- eval_runtime: 197.7117
- eval_samples_per_second: 8.33
- eval_steps_per_second: 1.042
- epoch: 3.67
- step: 400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| 5f233c1b9993f05d7e792f591448824a |
elopezlopez/Bio_ClinicalBERT_fold_10_binary_v1 | elopezlopez | bert | 16 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,668 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_10_binary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5504
- F1: 0.8243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3803 | 0.8103 |
| 0.4005 | 2.0 | 576 | 0.4769 | 0.8070 |
| 0.4005 | 3.0 | 864 | 0.5258 | 0.7955 |
| 0.1889 | 4.0 | 1152 | 0.7423 | 0.8153 |
| 0.1889 | 5.0 | 1440 | 1.1246 | 0.8012 |
| 0.0703 | 6.0 | 1728 | 1.1325 | 0.8039 |
| 0.0246 | 7.0 | 2016 | 1.2192 | 0.8196 |
| 0.0246 | 8.0 | 2304 | 1.3645 | 0.8050 |
| 0.0192 | 9.0 | 2592 | 1.4029 | 0.8087 |
| 0.0192 | 10.0 | 2880 | 1.3714 | 0.8117 |
| 0.0107 | 11.0 | 3168 | 1.4673 | 0.8092 |
| 0.0107 | 12.0 | 3456 | 1.3941 | 0.8199 |
| 0.0084 | 13.0 | 3744 | 1.4350 | 0.8126 |
| 0.0083 | 14.0 | 4032 | 1.4428 | 0.8162 |
| 0.0083 | 15.0 | 4320 | 1.2892 | 0.8263 |
| 0.0119 | 16.0 | 4608 | 1.4238 | 0.8222 |
| 0.0119 | 17.0 | 4896 | 1.4961 | 0.8174 |
| 0.0046 | 18.0 | 5184 | 1.5010 | 0.8107 |
| 0.0046 | 19.0 | 5472 | 1.4876 | 0.8215 |
| 0.0036 | 20.0 | 5760 | 1.5080 | 0.8180 |
| 0.0031 | 21.0 | 6048 | 1.5317 | 0.8261 |
| 0.0031 | 22.0 | 6336 | 1.5103 | 0.8215 |
| 0.0005 | 23.0 | 6624 | 1.5255 | 0.8197 |
| 0.0005 | 24.0 | 6912 | 1.5578 | 0.8257 |
| 0.0001 | 25.0 | 7200 | 1.5504 | 0.8243 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| c4dbd4ac0b0aab12245b3ff43a2b98ab |
heziyevv/aze-bert-tokenizer-middle | heziyevv | null | 6 | 0 | null | 0 | null | false | false | false | mit | ['az'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['wikipedia', 'books', 'social-media'] | false | true | true | 602 | false |
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Farid Haziyev
- **Model type:** Tokenizer
- **Language(s) (NLP):** Azerbaijani
- **License:** MIT
- **Finetuned from model [optional]:** bert-base-uncased
# Uses
Can be used in any project aimed at improving Azerbaijani language models
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("heziyevv/aze-bert-tokenizer-middle")
``` | 1e3a624d32a2630edbcf79265c5e7adb |
YKXBCi/vit-base-patch16-224-in21k-aidSat | YKXBCi | vit | 15 | 3 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,448 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YKXBCi/vit-base-patch16-224-in21k-aidSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4026
- Train Accuracy: 0.9981
- Train Top-3-accuracy: 0.9998
- Validation Loss: 0.4715
- Validation Accuracy: 0.9796
- Validation Top-3-accuracy: 0.9980
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1325, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
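The optimizer entry above is the serialized Keras config. As a rough, unofficial sketch, an equivalent optimizer and schedule could be built with the `create_optimizer` helper from `transformers`; the warmup step count of 0 is an assumption consistent with the absence of a WarmUp wrapper in the config.
```python
import tensorflow as tf
from transformers import create_optimizer

# Corresponds to training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a linear PolynomialDecay over 1325 steps, as recorded above
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=1325,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```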
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 2.3544 | 0.7383 | 0.8687 | 1.5415 | 0.9266 | 0.9857 | 0 |
| 1.1313 | 0.9522 | 0.9942 | 0.8788 | 0.9613 | 0.9966 | 1 |
| 0.6741 | 0.9841 | 0.9985 | 0.6268 | 0.9640 | 0.9986 | 2 |
| 0.4785 | 0.9953 | 0.9995 | 0.5058 | 0.9755 | 0.9980 | 3 |
| 0.4026 | 0.9981 | 0.9998 | 0.4715 | 0.9796 | 0.9980 | 4 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 3e4149da9b538d9d7143db6ce294da05 |
theojolliffe/bart-model2-1510-e1 | theojolliffe | bart | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-model2-1510-e1
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 409 | 0.3606 | 62.6578 | 56.1468 | 60.4978 | 61.0214 | 20.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 2059d8bd221ac6e47c46c3f58e713587 |
silviacamplani/distilbert-uncase-direct-finetuning-ai-ner | silviacamplani | distilbert | 8 | 2 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,896 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-uncase-direct-finetuning-ai-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6021
- Validation Loss: 1.6163
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2752 | 3.0320 | 0 |
| 2.7791 | 2.5293 | 1 |
| 2.2674 | 2.0340 | 2 |
| 1.8952 | 1.8222 | 3 |
| 1.7933 | 1.7669 | 4 |
| 1.7352 | 1.7158 | 5 |
| 1.6868 | 1.6706 | 6 |
| 1.6242 | 1.6412 | 7 |
| 1.5899 | 1.6234 | 8 |
| 1.6021 | 1.6163 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| c1da6f9f0febac9a23df83a7dbc8cd08 |
techiaith/fullstop-welsh-punctuation-prediction | techiaith | xlm-roberta | 8 | 11 | transformers | 1 | token-classification | true | false | false | mit | ['cy'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['punctuation prediction', 'punctuation'] | false | true | true | 2,773 | false |
This model predicts the punctuation of Welsh-language texts. It was created to restore the punctuation of text transcribed by speech recognition models such as https://huggingface.co/techiaith/wav2vec2-xlsr-ft-cy. The model restores the following punctuation markers: "." "," "?" "-" ":"
The model was trained on Welsh texts extracted from the Welsh Parliament / Senedd Record of Proceedings from 1999 to 2010 and from 2016 to the present day. Please note that the training data consists of originally spoken and translated political speeches, so the model might perform differently on texts from other domains.
Based on the work of https://github.com/oliverguhr/fullstop-deep-punctuation-prediction and [softcatala/fullstop-catalan-punctuation-prediction](https://huggingface.co/softcatala/fullstop-catalan-punctuation-prediction)
## Install
To get started install the deepmultilingualpunctuation package from [pypi](https://pypi.org/project/deepmultilingualpunctuation/):
```bash
pip install deepmultilingualpunctuation
```
### Restore Punctuation
```python
from deepmultilingualpunctuation import PunctuationModel
model = PunctuationModel("techiaith/fullstop-welsh-punctuation-prediction")
text = "A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn ne ddwyrain Cymru"
result = model.restore_punctuation(text)
print(result)
```
**output**
```
[
{
"entity_group": "LABEL_0",
"score": 0.9999812841415405,
"word": "A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn",
"start": 0,
"end": 58
},
{
"entity_group": "LABEL_4",
"score": 0.9787278771400452,
"word": "ne",
"start": 59,
"end": 61
},
{
"entity_group": "LABEL_0",
"score": 0.9999902248382568,
"word": "ddwyrain",
"start": 62,
"end": 70
},
{
"entity_group": "LABEL_3",
"score": 0.9484745860099792,
"word": "Cymru",
"start": 71,
"end": 76
}
]
```
> A yw'r gweinidog yn cytuno bod angen gwell gwasanaethau yn ne-ddwyrain Cymru?
## Results
The model achieves the following F1 scores for the different punctuation markers:
| Label | Precision | Recall | f1-score | Support |
| ------------- | ----- | ----- | ----- | ----- |
| 0 | 0.99 | 0.99 | 0.99 | 5053572 |
| . | 0.89 | 0.88 | 0.88 | 224920 |
| , | 0.83 | 0.82 | 0.82 | 363886 |
| ? | 0.91 | 0.87 | 0.89 | 20762 |
| - | 0.95 | 0.94 | 0.94 | 13161 |
| : | 0.92 | 0.89 | 0.90 | 5274 |
| | | | | |
| accuracy | | | 0.98 | 11012581 |
| macro average | 0.92 | 0.90 | 0.91 | 11012581 |
| weighted average | 0.98 | 0.98 | 0.98 | 11012581 |
| be11b6b4f5c74b26789ff371f8badfae |
DOOGLAK/Article_100v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['article100v3_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_100v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3754
- Precision: 0.3300
- Recall: 0.3119
- F1: 0.3207
- Accuracy: 0.8602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 0.5728 | 0.1855 | 0.0562 | 0.0862 | 0.7983 |
| No log | 2.0 | 46 | 0.4070 | 0.2869 | 0.2315 | 0.2562 | 0.8445 |
| No log | 3.0 | 69 | 0.3754 | 0.3300 | 0.3119 | 0.3207 | 0.8602 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
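For reference, a hedged inference sketch with the `transformers` pipeline API is shown below; the example sentence is a placeholder, and the F1 of roughly 0.32 reported above suggests predictions should be treated with caution.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_100v3_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Barack Obama visited Berlin in 2013."))  # placeholder input
```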
| e93a0e8b531f5c18d4f3b0a98e390784 |
hakurei/waifu-diffusion-v1-4 | hakurei | null | 12 | 0 | null | 543 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 20 | 9 | 0 | 10 | 20 | 20 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,297 | false |
![image](https://user-images.githubusercontent.com/26317155/210155933-db3a5f1a-1ec3-4777-915c-6deff2841ce9.png)
<sub>masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck</sub>
# Waifu Diffusion v1.4
Waifu Diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
- [Waifu Diffusion 1.4 Anime Epoch 1](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.ckpt): A test model made to properly ensure that the training setup works.
- [Waifu Diffusion 1.4 Anime Inference Config](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by Stability AI and NovelAI.
- [Haru](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Cafe](https://twitter.com/cafeai_labs)
In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
[![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai) | 8776a58d1bf36ef326aa44eed8fe5a82 |
ViktorDo/DistilBERT-WIKI_Growth_Form_Finetuned | ViktorDo | distilbert | 12 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,315 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-WIKI_Growth_Form_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2454 | 1.0 | 2320 | 0.2530 |
| 0.1875 | 2.0 | 4640 | 0.2578 |
| 0.1386 | 3.0 | 6960 | 0.2666 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 1ae7dc89635849566a0241f0668bc35d |
Sparkette/elliesparks | Sparkette | null | 18 | 36 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 615 | false | ### EllieSparks Dreambooth model trained by Sparkette with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
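As a local alternative to the notebooks above, a minimal `diffusers` sketch is shown below. It assumes the repository ships diffusers-format weights and that the instance prompt token is `elliesparks`; adjust the prompt to the token actually used during training.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes a CUDA GPU; drop torch_dtype and .to("cuda") to run on CPU (slowly).
pipe = StableDiffusionPipeline.from_pretrained(
    "Sparkette/elliesparks", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of elliesparks, portrait, highly detailed").images[0]
image.save("elliesparks.png")
```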
Sample pictures of this concept:
| 523a40ae2ffc2e73c735b8beec11f9a0 |
Helsinki-NLP/opus-mt-fi-NORWAY | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,044 | false |
### opus-mt-fi-NORWAY
* source languages: fi
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [fi-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.no | 34.2 | 0.545 |
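A minimal usage sketch with the `transformers` Marian classes is shown below; the `>>nb<<` token is one of the target-language IDs listed above (Bokmål), and the example sentence is only a placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-NORWAY"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target variant.
src = [">>nb<< Tämä on testi."]  # placeholder Finnish input
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```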
| 393687c99e9064b26cfba29a7d686064 |
mselbach/distilbart-rehadat | mselbach | bart | 12 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-summarization
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cpu
- Datasets 1.17.0
- Tokenizers 0.11.0
| 5b619a7b1005f002189aabc5aa536192 |
ameerazam08/wav2vec2-xlsr-greek-speech-fluency | ameerazam08 | wav2vec2 | 18 | 2 | transformers | 1 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,377 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-greek-speech-fluency
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5886
- Accuracy: 0.6800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6625 | 0.8 | 10 | 0.5886 | 0.6800 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 2.4.1.dev0
- Tokenizers 0.12.1
| d43e04abc32dc39fe0fe232164745157 |
Helsinki-NLP/opus-mt-tw-es | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-tw-es
* source languages: tw
* target languages: es
* OPUS readme: [tw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.es | 25.9 | 0.441 |
| 86ef625c90bfcae9bf492b47f9440100 |
gokuls/distilbert_add_GLUE_Experiment_logit_kd_wnli_192 | gokuls | distilbert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,814 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_wnli_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3434
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3489 | 1.0 | 3 | 0.3454 | 0.5634 |
| 0.348 | 2.0 | 6 | 0.3465 | 0.5634 |
| 0.3471 | 3.0 | 9 | 0.3436 | 0.5634 |
| 0.3486 | 4.0 | 12 | 0.3434 | 0.5634 |
| 0.3485 | 5.0 | 15 | 0.3448 | 0.5634 |
| 0.3476 | 6.0 | 18 | 0.3471 | 0.5634 |
| 0.3481 | 7.0 | 21 | 0.3463 | 0.5634 |
| 0.3473 | 8.0 | 24 | 0.3451 | 0.5634 |
| 0.3491 | 9.0 | 27 | 0.3440 | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 0eb91d4b851934aef041a864fda6d62e |
fathyshalab/all-roberta-large-v1-auto_and_commute-5-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,521 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 21c923d7ca5f6f0b33f155949ad3e80d |
YKXBCi/vit-base-patch16-224-in21k-euroSat | YKXBCi | vit | 41 | 4 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,184 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YKXBCi/vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0495
- Train Accuracy: 0.9948
- Train Top-3-accuracy: 0.9999
- Validation Loss: 0.0782
- Validation Accuracy: 0.9839
- Validation Top-3-accuracy: 1.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.4593 | 0.9478 | 0.9912 | 0.1558 | 0.9809 | 0.9995 | 0 |
| 0.1008 | 0.9876 | 0.9997 | 0.0855 | 0.9856 | 1.0 | 1 |
| 0.0495 | 0.9948 | 0.9999 | 0.0782 | 0.9839 | 1.0 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 6697f243e972f11eb14781e747475146 |
peterhsu/tf-distilbert-base-uncased-finetuned-imdb | peterhsu | distilbert | 10 | 5 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,642 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# peterhsu/tf-distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5691
- Validation Loss: 2.4661
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8546 | 2.6095 | 0 |
| 2.6594 | 2.5243 | 1 |
| 2.5691 | 2.4661 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 28ee76ce48911fd02bea39fbba84e037 |
joe5campbell/BERT_Tweet_Sentiment_TEST | joe5campbell | bert | 4 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,388 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5541
- Train Accuracy: 0.9375
- Validation Loss: 0.6546
- Validation Accuracy: 1.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6902 | 0.625 | 0.6677 | 1.0 | 0 |
| 0.5541 | 0.9375 | 0.6546 | 1.0 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
| ae0d7595f7d5c34e8795411d0e3b0774 |
google/multiberts-seed_4-step_160k | google | bert | 8 | 12 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_160k'] | false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 160k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 160k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_160k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_160k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 6b448a8950f48c3e6535db1e33c68d30 |
madatnlp/codet5-kormath | madatnlp | t5 | 9 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,909 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# codet5-kormath
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1215
- Validation Loss: 0.5475
- Epoch: 43
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.8077 | 3.6916 | 0 |
| 3.3252 | 2.3874 | 1 |
| 2.2389 | 1.7377 | 2 |
| 1.6852 | 1.3997 | 3 |
| 1.3617 | 1.1785 | 4 |
| 1.1875 | 1.0796 | 5 |
| 1.0887 | 1.0150 | 6 |
| 1.0234 | 0.9669 | 7 |
| 0.9709 | 0.9350 | 8 |
| 0.9292 | 0.9016 | 9 |
| 0.8982 | 0.8763 | 10 |
| 0.8653 | 0.8565 | 11 |
| 0.8389 | 0.8378 | 12 |
| 0.8094 | 0.8241 | 13 |
| 0.7828 | 0.8078 | 14 |
| 0.7631 | 0.7945 | 15 |
| 0.7386 | 0.7856 | 16 |
| 0.7209 | 0.7749 | 17 |
| 0.7039 | 0.7703 | 18 |
| 0.6858 | 0.7708 | 19 |
| 0.6718 | 0.7705 | 20 |
| 0.6544 | 0.7710 | 21 |
| 0.6477 | 0.7728 | 22 |
| 0.6289 | 0.7590 | 23 |
| 0.6102 | 0.7616 | 24 |
| 0.5966 | 0.7644 | 25 |
| 0.5773 | 0.7559 | 26 |
| 0.5597 | 0.7609 | 27 |
| 0.5384 | 0.7823 | 28 |
| 0.5256 | 0.7263 | 29 |
| 0.4780 | 0.6671 | 30 |
| 0.4328 | 0.6591 | 31 |
| 0.3844 | 0.5918 | 32 |
| 0.3366 | 0.5852 | 33 |
| 0.3036 | 0.5633 | 34 |
| 0.2746 | 0.5566 | 35 |
| 0.2486 | 0.5644 | 36 |
| 0.2286 | 0.5293 | 37 |
| 0.2023 | 0.5256 | 38 |
| 0.1870 | 0.5276 | 39 |
| 0.1638 | 0.5272 | 40 |
| 0.1479 | 0.5642 | 41 |
| 0.1386 | 0.5482 | 42 |
| 0.1215 | 0.5475 | 43 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| 763c60522372999f20265f2ddb405201 |
jonatasgrosman/exp_w2v2t_et_hubert_s118 | jonatasgrosman | hubert | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['et'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'et'] | false | true | true | 452 | false | # exp_w2v2t_et_hubert_s118
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
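A minimal transcription sketch using the HuggingSound API referenced above; the audio paths are placeholders.
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_hubert_s118")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```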
| fadc424d79c00447bb0f9e83440a066a |
spacy/hr_core_news_sm | spacy | null | 32 | 2 | spacy | 0 | token-classification | false | false | false | cc-by-sa-4.0 | ['hr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 42,062 | false | ### Details: https://spacy.io/models/hr#hr_core_news_sm
Croatian pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
| Feature | Description |
| --- | --- |
| **Name** | `hr_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Training corpus hr500k 1.0](http://hdl.handle.net/11356/1183) (Ljubešić, Nikola ; Agić, Željko ; Klubička, Filip ; Batanović, Vuk and Erjavec, Tomaž) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
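A minimal usage sketch, assuming the pipeline has been installed (e.g. with `python -m spacy download hr_core_news_sm`); the example sentence is only a placeholder.
```python
import spacy

nlp = spacy.load("hr_core_news_sm")
doc = nlp("Zagreb je glavni grad Hrvatske.")  # "Zagreb is the capital of Croatia."

# Tagger/morphologizer/parser/lemmatizer output
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)

# Named entities from the ner component
for ent in doc.ents:
    print(ent.text, ent.label_)
```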
### Label Scheme
<details>
<summary>View label scheme (1518 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmpny`, `Agcmsany`, `Agcmsay`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpay`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsin`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmsan`, `Agpmsann`, `Agpmsany`, `Agpmsay`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsdy`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmsany`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsgy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfpiy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsayn`, `Aspmsayy`, `Aspmsdn`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsgn`, `Aspnsgy`, `Aspnsln`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsmpn`, `Mlsn-n`, `Mrc`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, 
`Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sg`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pl`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsi`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, _(truncated: full list in pipeline meta)_ |
| **`morphologizer`** | `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Degree=Pos\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=PART`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=CCONJ`, `Case=Gen\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=X`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, 
`Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, 
`Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `NumType=Mult\|POS=NUM`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Mult\|POS=SYM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, 
`Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl:pv`, `fixed`, `flat`, `flat:foreign`, `goeswith`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `DERIV_PER`, `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.89 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.71 |
| `TOKEN_F` | 97.99 |
| `TAG_ACC` | 90.03 |
| `POS_ACC` | 96.74 |
| `MORPH_ACC` | 90.62 |
| `MORPH_MICRO_P` | 94.88 |
| `MORPH_MICRO_R` | 94.34 |
| `MORPH_MICRO_F` | 94.61 |
| `SENTS_P` | 94.95 |
| `SENTS_R` | 94.72 |
| `SENTS_F` | 94.84 |
| `DEP_UAS` | 84.47 |
| `DEP_LAS` | 77.65 |
| `LEMMA_ACC` | 91.94 |
| `ENTS_P` | 77.38 |
| `ENTS_R` | 75.45 |
| `ENTS_F` | 76.40 | | afa29e1e8c224c3fe8402ccaeeec24b5 |
Aotsuyu/HogwartLora | Aotsuyu | null | 29 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['anime'] | false | true | true | 6,713 | false | # Hogwart uniforms LoRA
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/0.png" width="800" height="512">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/0.png)
A LoRA for Hogwarts school uniforms, since Hogwarts Legacy renewed people's interest in the franchise.
# What to get
I am including all epochs, but I've personally had the best results with the 2nd to 4th epochs, which I have renamed to *hogsks-weak*, *hogsks-mid* and *hogsks-hard*.
Most models already seem to have an idea of what the uniform looks like, so they only need a small push - that's why I suggest starting with ***hogsks-mid***.
Only go for a higher epoch if you're sure that's what you need.
# Invoking
I made the trigger token **hogsks**. I also tried to tag each of the images in the dataset with the proper house, so you might get *some* results when prompting for ravenclaw, gryffindor, slytherin and hufflepuff, but it's not super reliable.<br>
For those using the native implementation of LoRA, remember to also activate it!<br>
What I propose as a base prompting template:<br>
`hogsks, hogwarts school uniform, black robe, gray vest, slytherin, green tie`<br>
***Color*** emblem and ***color*** scarf also seem to work reasonably well.
Adjust the house name and colors to match whichever house you want, obviously.
This image is made with a very basic prompt:
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/1.png" width="512" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/1.png)
<details>
<summary>Prompt</summary>
<pre>
best quality, 1girl, Hogsks, hogwarts school uniform, black cape, gray vest, slytherin, green tie,
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), bad hands, artist name
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3963964880, Size: 512x762, Model: anything-v4.5-pruned, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000003(c945fe615333), AddNet Weight A 1: 0.85, AddNet Weight B 1: 0.85, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp</pre>
</details>
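If you are using diffusers rather than the webui, a rough sketch along these lines should work. The base-model path and the exact LoRA file name below are placeholders (check this repo's file list), and it assumes a diffusers version that can load kohya-style LoRA safetensors:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any NAI-derived SD 1.x anime checkpoint should work; this path is a placeholder.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/your-anime-sd15-model", torch_dtype=torch.float16
).to("cuda")

# Pick one of the epochs from this repo (file name is a guess, check the file list).
pipe.load_lora_weights("Aotsuyu/HogwartLora", weight_name="hogsks-mid.safetensors")

image = pipe(
    "hogsks, hogwarts school uniform, black robe, gray vest, slytherin, green tie",
    negative_prompt="low quality, worst quality, bad anatomy",
    num_inference_steps=25,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.85},  # roughly the AddNet weight used in the previews
).images[0]
image.save("hogsks.png")
```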
<br><br>
# Previews
All the previews have prompts included, so read that!
The model I used for Hololive [can be found here](https://huggingface.co/Aotsuyu/Qcha/blob/main/Qcha-hllv1.safetensors). It's a merge I did.
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/2.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/2.png)
<details>
<summary>Prompt</summary>
<pre>
(best quality, 1girl, reimu hakurei, brown hair, red eyes, hogsks, hogwarts school uniform, slytherin, black robe, green scarf, perplexed, (gray vest:1.2), gray skirt, red ribbon, outside, snow, black-green robe
Negative prompt: 2girls, (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), blue cloak,
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 478121638, Size: 568x768, Model: anything-v4.5-pruned, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95
</pre> </details>
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/3.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/3.png)
<details>
<summary>Prompt</summary>
<pre>
best quality, 1girl, flandre scarlet, blonde hair, vampire, fangs, red eyes, hogsks, hogwarts school uniform, hufflepuff, black robe, yellow scarf, (:3:0.5), (gray vest:1.2), gray skirt, outside, snow, black-yellow robe, crystal wings, side ponytail
Negative prompt: 2girls, (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), blue cloak,
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 1055056090, Size: 568x768, Model: anything-v4.5-pruned, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95
</pre></details>
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/4.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/4.png)
<details>
<summary>Prompt</summary>
<pre>
best quality, 1girl, gawr gura, (loli:0.5), ravenclaw, hogsks, hogwarts school uniform, black robe, blue scarf, shark teeth, (:3:0.5), (gray vest:1.2),
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by (bad-artist:1.0), bad-hands-5, (bad-image-v2-39000:1.0), extra digit, fewer digits, (extra arms:1.2),
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3622139475, Size: 568x768, Model: Qcha-hllv1, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp
</pre></details>
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/5.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/5.png)
<details>
<summary>Prompt</summary>
<pre>
best quality, 1girl, black hair, glasses, gryffindor, hogsks, hogwarts school uniform, black robe, red scarf, (scared), (gray vest:1.2), looking at viewer, evening, night, dark
Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by (bad-artist:1.0), bad-hands-5, (bad-image-v2-39000:1.0), extra digit, fewer digits, (extra arms:1.2),
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2895674484, Size: 568x768, Model: pastelmix-better-vae-fp32, Denoising strength: 0.74, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.9, AddNet Weight B 1: 0.9, Hires upscale: 1.8, Hires steps: 20, Hires upscaler: Latent (nearest-exact)
</pre></details>
<br><br>
# Model comparison
This is trained on base NAI, so any model derived from it should do fine.
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/grid.png" width="840" height="964">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/grid.png)
<br>
# Contact
If you have any questions, you can DM me on [twitter.](https://twitter.com/aojiru_pixiv)
My pixiv if you're up for lewds:
[Pixiv](https://www.pixiv.net/en/users/12336647) | 0f1b99020d0c1927b3c31d383b8fb093 |
QCRI/PropagandaTechniquesAnalysis-en-BERT | QCRI | bert | 5 | 7 | transformers | 2 | null | true | false | false | MIT | ['en'] | [] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['propaganda', 'bert'] | false | true | true | 3,000 | false |
Propaganda Techniques Analysis BERT
----
This is a BERT-based model that predicts which propaganda techniques are used in
English news articles. The model is described in
[this paper](https://propaganda.qcri.org/papers/EMNLP_2019__Fine_Grained_Propaganda_Detection.pdf).
## Model description
The definitions of the propaganda techniques can be found here:
https://propaganda.qcri.org/annotations/definitions.html
You can also try the model in action here: https://www.tanbih.org/prta
### How to use
```python
>>> import torch
>>> from transformers import BertTokenizerFast
>>> from .model import BertForTokenAndSequenceJointClassification  # custom class defined in an accompanying model.py (not part of transformers)
>>>
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
>>> model = BertForTokenAndSequenceJointClassification.from_pretrained(
>>> "QCRI/PropagandaTechniquesAnalysis-en-BERT",
>>> revision="v0.1.0",
>>> )
>>>
>>> inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> sequence_class_index = torch.argmax(outputs.sequence_logits, dim=-1)
>>> sequence_class = model.sequence_tags[sequence_class_index[0]]
>>> token_class_index = torch.argmax(outputs.token_logits, dim=-1)
>>> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1])
>>> tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]]
```
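The predicted token-level tags line up with the word pieces returned above, so they can be inspected side by side (continuing the example):

```python
>>> for token, tag in zip(tokens, tags):
...     print(f"{token}\t{tag}")
```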
### BibTeX entry and citation info
```bibtex
@inproceedings{da-san-martino-etal-2019-fine,
title = "Fine-Grained Analysis of Propaganda in News Article",
author = "Da San Martino, Giovanni and
Yu, Seunghak and
Barr{\'o}n-Cede{\~n}o, Alberto and
Petrov, Rostislav and
Nakov, Preslav",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1565",
doi = "10.18653/v1/D19-1565",
pages = "5636--5646",
abstract = "Propaganda aims at influencing people{'}s mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.",
}
```
| 7f23bc649827bf205190f490bab09d7d |
sd-concepts-library/pantone-milk | sd-concepts-library | null | 11 | 0 | null | 9 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,456 | false | ### Pantone Milk on Stable Diffusion
This is the `<pantone-milk>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
![<pantone-milk> 6](https://cdn.discordapp.com/attachments/978476082065993758/1018719752090304552/unknown.png)
**Mentioning 'isometric' and 'bottle' in your prompt is recommended.**
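If you prefer to use the embedding outside the notebooks, diffusers can load it directly. A minimal sketch, where the base model is just a placeholder for any Stable Diffusion v1.x checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Loads the learned <pantone-milk> token from this repository.
pipe.load_textual_inversion("sd-concepts-library/pantone-milk")

image = pipe("an isometric render of a <pantone-milk> bottle").images[0]
image.save("pantone-milk.png")
```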
Here is the new concept you will be able to use as an `object`:
![<pantone-milk> 0](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/5.jpeg)
![<pantone-milk> 1](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/3.jpeg)
![<pantone-milk> 2](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/0.jpeg)
![<pantone-milk> 3](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/2.jpeg)
![<pantone-milk> 4](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/1.jpeg)
![<pantone-milk> 5](https://huggingface.co/sd-concepts-library/pantone-milk/resolve/main/concept_images/4.jpeg)
| 00b583c3bbea1aea1d65340d1fbb1192 |
AbidHasan95/movieHunt3-ner | AbidHasan95 | distilbert | 16 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,611 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movieHunt3-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
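A minimal inference sketch with the raw model (the example sentence is illustrative only, and the label names come from the checkpoint's config rather than this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("AbidHasan95/movieHunt3-ner")
model = AutoModelForTokenClassification.from_pretrained("AbidHasan95/movieHunt3-ner")

inputs = tokenizer("Show me sci-fi movies directed by Ridley Scott", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])
```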
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 95 | 0.0462 |
| No log | 2.0 | 190 | 0.0067 |
| No log | 3.0 | 285 | 0.0028 |
| No log | 4.0 | 380 | 0.0018 |
| No log | 5.0 | 475 | 0.0014 |
| 0.1098 | 6.0 | 570 | 0.0012 |
| 0.1098 | 7.0 | 665 | 0.0011 |
| 0.1098 | 8.0 | 760 | 0.0010 |
| 0.1098 | 9.0 | 855 | 0.0010 |
| 0.1098 | 10.0 | 950 | 0.0009 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 0fc9198e5cda43cdd47ba30bf5ac3cab |
V3RX2000/xlm-roberta-base-finetuned-panx-fr | V3RX2000 | xlm-roberta | 10 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2651
- F1: 0.8355
## Model description
More information needed
## Intended uses & limitations
More information needed
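A minimal usage sketch with the standard `transformers` pipeline (the French example sentence is illustrative only):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="V3RX2000/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Emmanuel Macron a prononcé un discours à Strasbourg."))
```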
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5954 | 1.0 | 191 | 0.3346 | 0.7975 |
| 0.2689 | 2.0 | 382 | 0.2900 | 0.8347 |
| 0.1821 | 3.0 | 573 | 0.2651 | 0.8355 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| eacabb963b72ef831b46f379e41ff435 |
psrohith98/distilbert-base-uncased-finetuned-emotion | psrohith98 | distilbert | 18 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,485 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1500
- Accuracy: 0.932
- F1: 0.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
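A minimal inference sketch (the label names are whatever was stored in the checkpoint's `id2label`, normally the six classes of the `emotion` dataset):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="psrohith98/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("i am feeling quite hopeful about tomorrow"))
```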
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7995 | 1.0 | 250 | 0.2679 | 0.9145 | 0.9122 |
| 0.2111 | 2.0 | 500 | 0.1731 | 0.9265 | 0.9265 |
| 0.1399 | 3.0 | 750 | 0.1517 | 0.938 | 0.9381 |
| 0.1118 | 4.0 | 1000 | 0.1500 | 0.932 | 0.9319 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 23ef16d5f83f5fe9b717795dd1665944 |
KISSz/wav2vac2-vee-train001-ASR | KISSz | wav2vec2 | 15 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | cc-by-sa-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vac2-vee-train001-ASR
This model is a fine-tuned version of [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
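A minimal inference sketch, assuming the checkpoint ships the usual Wav2Vec2 processor files. The audio path is a placeholder; since the base checkpoint is a Thai XLSR model, Thai speech sampled at 16 kHz is the expected input:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="KISSz/wav2vac2-vee-train001-ASR",
)
print(asr("example_16khz.wav")["text"])
```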
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cpu
- Datasets 1.11.0
- Tokenizers 0.10.3
| 67710dfec09e571b8cb68a45d32be289 |
sd-concepts-library/million-live-akane-15k | sd-concepts-library | null | 60 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 6,965 | false | ### million-live-akane-15k on Stable Diffusion
This is the `<akane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<akane> 0](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/0.png)
![<akane> 1](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/1.png)
![<akane> 2](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/2.png)
![<akane> 3](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/3.png)
![<akane> 4](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/4.png)
![<akane> 5](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/5.png)
![<akane> 6](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/6.png)
![<akane> 7](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/7.png)
![<akane> 8](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/8.png)
![<akane> 9](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/9.png)
![<akane> 10](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/10.png)
![<akane> 11](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/11.png)
![<akane> 12](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/12.png)
![<akane> 13](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/13.png)
![<akane> 14](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/14.png)
![<akane> 15](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/15.png)
![<akane> 16](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/16.png)
![<akane> 17](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/17.png)
![<akane> 18](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/18.png)
![<akane> 19](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/19.png)
![<akane> 20](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/20.png)
![<akane> 21](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/21.png)
![<akane> 22](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/22.png)
![<akane> 23](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/23.png)
![<akane> 24](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/24.png)
![<akane> 25](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/25.png)
![<akane> 26](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/26.png)
![<akane> 27](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/27.png)
![<akane> 28](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/28.png)
![<akane> 29](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/29.png)
![<akane> 30](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/30.png)
![<akane> 31](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/31.png)
![<akane> 32](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/32.png)
![<akane> 33](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/33.png)
![<akane> 34](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/34.png)
![<akane> 35](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/35.png)
![<akane> 36](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/36.png)
![<akane> 37](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/37.png)
![<akane> 38](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/38.png)
![<akane> 39](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/39.png)
![<akane> 40](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/40.png)
![<akane> 41](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/41.png)
![<akane> 42](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/42.png)
![<akane> 43](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/43.png)
![<akane> 44](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/44.png)
![<akane> 45](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/45.png)
![<akane> 46](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/46.png)
![<akane> 47](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/47.png)
![<akane> 48](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/48.png)
![<akane> 49](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/49.png)
![<akane> 50](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/50.png)
![<akane> 51](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/51.png)
![<akane> 52](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/52.png)
![<akane> 53](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/53.png)
![<akane> 54](https://huggingface.co/sd-concepts-library/million-live-akane-15k/resolve/main/concept_images/54.png)
| 9be0681f456193ff64e338974e0ecf93 |
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 494 | false | # exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
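A minimal transcription sketch with HuggingSound (the file paths are placeholders; audio should be sampled at 16 kHz, as noted above):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-2_nortepeninsular-8_s443"
)
transcriptions = model.transcribe(["audio_1.wav", "audio_2.wav"])
print(transcriptions[0]["transcription"])
```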
| abd3f9a500967b973484ae232d9a06a0 |
ryo-hsgw/distilbert-base-uncased-distilled-clinc | ryo-hsgw | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3397
- Accuracy: 0.95
## Model description
More information needed
## Intended uses & limitations
More information needed
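A minimal inference sketch (clinc_oos is an intent-classification benchmark with 150 in-scope intents plus an out-of-scope class; the query below is illustrative only):

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="ryo-hsgw/distilbert-base-uncased-distilled-clinc",
)
print(intent_classifier("how do i transfer money from my savings to my checking account"))
```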
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.202 | 1.0 | 318 | 2.3610 | 0.7506 |
| 1.8112 | 2.0 | 636 | 1.1899 | 0.8610 |
| 0.9255 | 3.0 | 954 | 0.6534 | 0.9168 |
| 0.5268 | 4.0 | 1272 | 0.4620 | 0.9368 |
| 0.3624 | 5.0 | 1590 | 0.3941 | 0.9448 |
| 0.2935 | 6.0 | 1908 | 0.3682 | 0.9452 |
| 0.2584 | 7.0 | 2226 | 0.3515 | 0.9497 |
| 0.2393 | 8.0 | 2544 | 0.3453 | 0.9481 |
| 0.2289 | 9.0 | 2862 | 0.3421 | 0.9490 |
| 0.225 | 10.0 | 3180 | 0.3397 | 0.95 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| c37d76d096e5362e52b247895405aaed |
Sreek/xlm-roberta-base-finetuned-panx-all | Sreek | xlm-roberta | 10 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1233
- F1: 0.8820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1571 | 0.8381 |
| 0.1437 | 2.0 | 1430 | 0.1268 | 0.8717 |
| 0.0931 | 3.0 | 2145 | 0.1233 | 0.8820 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| ea1c909a758b7720caf6460abafd1eaf |
marcosgg/bert-base-gl-cased | marcosgg | bert | 7 | 310 | transformers | 1 | fill-mask | true | false | false | agpl-3.0 | ['gl', 'pt'] | null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 | [] | false | true | true | 1,088 | false |
# BERT for Galician (Base)
This is a base pre-trained BERT model (12 layers, cased) for Galician (ILG/RAG spelling). It was evaluated on lexical semantics tasks, using a [dataset to identify homonymy and synonymy in context](https://github.com/marcospln/homonymy_acl21), and presented at ACL 2021.
There is also a small version (6 layers, cased): `marcosgg/bert-small-gl-cased`
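A minimal usage sketch with the fill-mask pipeline (the Galician example sentence is illustrative only; the model uses the standard BERT `[MASK]` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="marcosgg/bert-base-gl-cased")
for prediction in fill_mask("A lingua galega é unha lingua [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```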
## Citation
If you use this model, please cite the following [paper](https://arxiv.org/abs/2106.13553):
```
@inproceedings{garcia-2021-exploring,
title = "Exploring the Representation of Word Meanings in Context: {A} Case Study on Homonymy and Synonymy",
author = "Garcia, Marcos",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.281",
doi = "10.18653/v1/2021.acl-long.281",
pages = "3625--3640"
}
``` | a32efd3d199bb598be80d9d3f184d2d1 |
siddharthtumre/pubmedbert-finetuned-ner | siddharthtumre | bert | 10 | 4 | transformers | 0 | token-classification | true | false | false | mit | null | ['jnlpba'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,806 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-finetuned-ner
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the jnlpba dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3766
- Precision: 0.6877
- Recall: 0.7833
- F1: 0.7324
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
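A minimal inference sketch (jnlpba covers bio-entities such as protein, DNA, RNA, cell line and cell type; the sentence below is illustrative only):

```python
from transformers import pipeline

bio_ner = pipeline(
    "token-classification",
    model="siddharthtumre/pubmedbert-finetuned-ner",
    aggregation_strategy="simple",
)
print(bio_ner("Interleukin-2 activates the NF-kappa B transcription factor in primary T cells."))
```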
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1607 | 1.0 | 2319 | 0.2241 | 0.6853 | 0.7835 | 0.7311 | 0.9302 |
| 0.112 | 2.0 | 4638 | 0.2620 | 0.6753 | 0.7929 | 0.7294 | 0.9276 |
| 0.0785 | 3.0 | 6957 | 0.3014 | 0.6948 | 0.7731 | 0.7319 | 0.9268 |
| 0.055 | 4.0 | 9276 | 0.3526 | 0.6898 | 0.7801 | 0.7322 | 0.9268 |
| 0.0418 | 5.0 | 11595 | 0.3766 | 0.6877 | 0.7833 | 0.7324 | 0.9267 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| ac5fd23dbe8e82c1f373020cc41415b7 |
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test | Edomonndo | marian | 157 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,955 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ja-en-finetuned-ja-to-en_test
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4737
- Bleu: 80.2723
- Gen Len: 16.5492
## Model description
More information needed
## Intended uses & limitations
More information needed
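A minimal inference sketch with the translation pipeline (the Japanese sentence is illustrative only):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test",
)
print(translator("猫はとても可愛いです。")[0]["translation_text"])
```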
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 |
| 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 |
| 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 |
| 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 |
| 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 |
| 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 |
| 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 |
| 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 |
| 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 |
| 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
| 0a035df37d98807aee76338cfefb77e4 |
gayanin/bart-med-term-conditional-masking | gayanin | bart | 15 | 5 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,858 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5115
- Rouge2 Precision: 0.7409
- Rouge2 Recall: 0.5343
- Rouge2 Fmeasure: 0.6025
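The sketch below illustrates one plausible way to query the model for masked medical terms; the placement and formatting of the `<mask>` token are assumptions, since the expected input format is not documented in this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: the input format (mask placement) is an assumption.
model_id = "gayanin/bart-med-term-conditional-masking"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The patient was diagnosed with <mask> after the blood test."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```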
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6278 | 1.0 | 15827 | 0.5546 | 0.7255 | 0.5244 | 0.5908 |
| 0.5356 | 2.0 | 31654 | 0.5286 | 0.7333 | 0.5293 | 0.5966 |
| 0.4757 | 3.0 | 47481 | 0.5154 | 0.7376 | 0.532 | 0.5998 |
| 0.4337 | 4.0 | 63308 | 0.5107 | 0.7406 | 0.5342 | 0.6023 |
| 0.4045 | 5.0 | 79135 | 0.5115 | 0.7409 | 0.5343 | 0.6025 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 3b6bcbe593483adb1580e714ac37fa94 |
Saulr/distilbert-base-uncased-finetuned-gender-classification | Saulr | distilbert | 10 | 178 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,353 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-gender-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8388
- Accuracy: 0.7856
- F1: 0.7855
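As a rough usage sketch (the label-to-class mapping is not documented here and is an assumption), the checkpoint can be queried like any sequence-classification model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: label names depend on how the training labels were mapped.
model_id = "Saulr/distilbert-base-uncased-finetuned-gender-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Loving the new haircut I got today!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```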
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4445 | 1.0 | 2015 | 0.5271 | 0.7846 | 0.7844 |
| 0.2534 | 2.0 | 4030 | 0.8388 | 0.7856 | 0.7855 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 999a430f5c093be1de443e1cfda78ec1 |
jayantapaul888/xlm-roberta-base-eng-only-sentiment-single-finetuned-memes | jayantapaul888 | xlm-roberta | 14 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,838 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-eng-only-sentiment-single-finetuned-memes
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5629
- Accuracy: 0.8652
- Precision: 0.8794
- Recall: 0.8786
- F1: 0.8789
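A minimal sketch for running the sentiment head over a meme caption is shown below; the repo id is taken from this card, and the returned label strings may be generic (e.g. LABEL_0) since the mapping is not documented.
```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint is published under this repo id.
sentiment = pipeline(
    "text-classification",
    model="jayantapaul888/xlm-roberta-base-eng-only-sentiment-single-finetuned-memes",
)
print(sentiment("This meme perfectly sums up my Monday morning."))
```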
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3506 | 0.8459 | 0.8647 | 0.8584 | 0.8605 |
| 0.4424 | 2.0 | 756 | 0.3264 | 0.8563 | 0.8818 | 0.8696 | 0.8689 |
| 0.2888 | 3.0 | 1134 | 0.3563 | 0.8578 | 0.8759 | 0.8701 | 0.8714 |
| 0.1889 | 4.0 | 1512 | 0.3939 | 0.8585 | 0.8733 | 0.8729 | 0.8730 |
| 0.1889 | 5.0 | 1890 | 0.4698 | 0.8622 | 0.8765 | 0.8761 | 0.8763 |
| 0.1136 | 6.0 | 2268 | 0.5629 | 0.8652 | 0.8794 | 0.8786 | 0.8789 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
| 4f96e0c955afd075dd216db89a877453 |
ramazank2000/turkishReviews-ds-mini1 | ramazank2000 | gpt2 | 10 | 4 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,504 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ramazank2000/turkishReviews-ds-mini1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.1811
- Validation Loss: 9.2643
- Epoch: 2
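Since the model was trained with Keras, a TensorFlow generation sketch is shown below; the checkpoint and tokenizer are assumed to be published under this repo id, and the Turkish prompt is illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Hedged sketch: sampling parameters are illustrative, not tuned.
model_id = "ramazank2000/turkishReviews-ds-mini1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Bu ürün", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```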
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2935 | 9.9865 | 0 |
| 9.6639 | 9.6408 | 1 |
| 9.1811 | 9.2643 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
| cb4e2915da6bc0fc7cb8241682b9a63e |
google/multiberts-seed_0-step_120k | google | bert | 8 | 13 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_120k'] | false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 120k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 120k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_120k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_120k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 2c911e724709507392670a48e2c0e6a6 |
zhichao158/wav2vec2-xls-r-common_voice-tr-ft | zhichao158 | wav2vec2 | 22 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['tr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer'] | true | true | true | 1,863 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3736
- Wer: 0.2930
- Cer: 0.0708
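For transcription, the checkpoint can be used through the ASR pipeline as sketched below; the audio path is a placeholder for a 16 kHz Turkish recording.
```python
from transformers import pipeline

# Hedged sketch: assumes the fine-tuned CTC checkpoint and tokenizer live under this repo id.
asr = pipeline(
    "automatic-speech-recognition",
    model="zhichao158/wav2vec2-xls-r-common_voice-tr-ft",
)
print(asr("turkish_sample_16khz.wav")["text"])
```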
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.5462 | 13.51 | 500 | 0.4423 | 0.4807 | 0.1188 |
| 0.342 | 27.03 | 1000 | 0.3781 | 0.3954 | 0.0967 |
| 0.2272 | 40.54 | 1500 | 0.3816 | 0.3595 | 0.0893 |
| 0.1805 | 54.05 | 2000 | 0.3943 | 0.3487 | 0.0854 |
| 0.1318 | 67.57 | 2500 | 0.3818 | 0.3262 | 0.0801 |
| 0.1213 | 81.08 | 3000 | 0.3777 | 0.3113 | 0.0758 |
| 0.0639 | 94.59 | 3500 | 0.3788 | 0.2953 | 0.0716 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.8.0
- Datasets 1.17.0
- Tokenizers 0.10.3
| 3c59c5f83f9966ccb4772c2482a625da |
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert | espnet | null | 21 | 18 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['en'] | ['iemocap'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 30,953 | false |
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert`
This model was trained by Yushi Ueda using the iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Feb 12 23:11:32 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Using Conformer based encoder, Transformer based decoder, and self-supervised learning features with spectral augmentation and predicting transcript along with sentiment
- ASR config: [conf/tuning/train_asr_conformer_hubert.yaml](conf/tuning/train_asr_conformer_hubert.yaml)
- token_type: word
- Sentiment Labels: Positive, Neutral, Negative
|dataset|Snt|Intent Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|66.5|76.4|75.7|
|decode_asr_model_valid.acc.ave_10best/test|1650|62.0|65.5|65.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_hubert_sentiment
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 5ec0f006d5d605bc3a29933060dd9776 |
ibrahim2030/tiny-bert-sst2-distilled | ibrahim2030 | bert | 80 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 990 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
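For a quick sanity check of the distilled model, the sketch below runs a few SST-2 validation sentences through it; the repo id is assumed to point at the published checkpoint.
```python
from datasets import load_dataset
from transformers import pipeline

# Hedged sketch: quick qualitative check on GLUE SST-2 validation examples.
classifier = pipeline("text-classification", model="ibrahim2030/tiny-bert-sst2-distilled")

sst2 = load_dataset("glue", "sst2", split="validation[:3]")
for example in sst2:
    print(example["sentence"], "->", classifier(example["sentence"])[0])
```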
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00013131868851033734
- train_batch_size: 128
- eval_batch_size: 128
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 35c0c2ef65c8b91f97140d726e14cc0c |
bookbot/distil-wav2vec2-xls-r-adult-child-cls-89m | bookbot | wav2vec2 | 10 | 3 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio-classification', 'generated_from_trainer'] | true | true | true | 2,560 | false |
# DistilWav2Vec2 XLS-R Adult/Child Speech Classifier 89M
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a distilled version of [wav2vec2-xls-r-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-xls-r-adult-child-cls) on a private adult/child speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------------- | ------- | ----- | ----------------------------------------- |
| `distil-wav2vec2-xls-r-adult-child-cls-89m` | 89M | XLS-R | Adult/Child Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.3048 | 93.54% | 0.9420 |
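A minimal classification sketch is given below; the audio path is a placeholder and the exact label strings (e.g. adult vs. child) are assumptions based on the task description.
```python
from transformers import pipeline

# Hedged sketch: expects a 16 kHz speech clip.
classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-xls-r-adult-child-cls-89m",
)
print(classifier("speech_sample.wav"))
```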
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 128
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.7711 | 1.0 | 96 | 0.5413 | 0.9017 | 0.9156 |
| 0.5551 | 2.0 | 192 | 0.4627 | 0.9164 | 0.9272 |
| 0.4166 | 3.0 | 288 | 0.3832 | 0.9261 | 0.9352 |
| 0.3928 | 4.0 | 384 | 0.3242 | 0.9331 | 0.9406 |
| 0.3622 | 5.0 | 480 | 0.3048 | 0.9354 | 0.9420 |
## Disclaimer
Consider the biases present in the pre-training datasets, which may carry over into the outputs of this model.
## Authors
DistilWav2Vec2 XLS-R Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| b34885607b296b832ccaa7df7b2a4fd4 |
GItaf/BERT-FINETUNE-MBTI-CLS | GItaf | bert | 17 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 890 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-CLS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
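As a rough illustration (the number of classes and the MBTI label mapping are not documented in this card and are assumptions), inference could look like:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: prints whatever label name is stored in the checkpoint config.
model_id = "GItaf/BERT-FINETUNE-MBTI-CLS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I enjoy quiet evenings, detailed plans, and long books.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```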
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| b1c465dbeba29d31478e62c27596a6d6 |
Helsinki-NLP/opus-mt-crs-es | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-crs-es
* source languages: crs
* target languages: es
* OPUS readme: [crs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.eval.txt)
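A usage sketch with the standard Marian classes is shown below; the Seychellois Creole example sentence is illustrative only.
```python
from transformers import MarianMTModel, MarianTokenizer

# Hedged sketch: standard Marian inference for crs -> es.
model_name = "Helsinki-NLP/opus-mt-crs-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonzour, ki mannyer ou ete?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```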
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.es | 26.1 | 0.445 |
| 4568aabe53dd9d2c7772a0f4578d5a03 |
lucasresck/bert-base-cased-ag-news | lucasresck | bert | 7 | 43 | transformers | 1 | text-classification | true | false | false | mit | ['en'] | ['ag_news'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'classification'] | false | true | true | 1,365 | false |
# bert-base-cased-ag-news
BERT model fine-tuned on AG News classification dataset using a linear layer on top of the [CLS] token output, with 0.945 test accuracy.
### How to use
Here is how to use this model to classify a given text:
```python
from transformers import AutoTokenizer, BertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('lucasresck/bert-base-cased-ag-news')
model = BertForSequenceClassification.from_pretrained('lucasresck/bert-base-cased-ag-news')
text = "Is it soccer or football?"
encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
output = model(**encoded_input)
```
### Limitations and bias
Bias was not assessed in this model, but since pre-trained BERT is known to carry bias, this fine-tuned model is expected to carry it as well. BERT's authors say: "This bias will also affect all fine-tuned versions of this model."
## Evaluation results
```
precision recall f1-score support
0 0.9539 0.9584 0.9562 1900
1 0.9884 0.9879 0.9882 1900
2 0.9251 0.9095 0.9172 1900
3 0.9127 0.9242 0.9184 1900
accuracy 0.9450 7600
macro avg 0.9450 0.9450 0.9450 7600
weighted avg 0.9450 0.9450 0.9450 7600
```
| 1c2b28d1a0e68c3c993048a049aa6c52 |
aristotletan/roberta-base-finetuned-sst2 | aristotletan | roberta | 16 | 5 | transformers | 0 | text-classification | true | false | false | mit | null | ['scim'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,439 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 2.0273 | 0.6667 |
| No log | 2.0 | 180 | 0.8802 | 0.8556 |
| No log | 3.0 | 270 | 0.5908 | 0.8889 |
| No log | 4.0 | 360 | 0.4632 | 0.9111 |
| No log | 5.0 | 450 | 0.4294 | 0.9111 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| 1a27ac13d5ff1dad80b196506e0cfde8 |
sd-concepts-library/ori | sd-concepts-library | null | 10 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,054 | false |
### Ori on Stable Diffusion
This is the `<Ori>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<Ori> 0](https://huggingface.co/sd-concepts-library/ori/resolve/main/concept_images/1.jpeg)
![<Ori> 1](https://huggingface.co/sd-concepts-library/ori/resolve/main/concept_images/2.jpeg)
![<Ori> 2](https://huggingface.co/sd-concepts-library/ori/resolve/main/concept_images/0.jpeg)
![<Ori> 3](https://huggingface.co/sd-concepts-library/ori/resolve/main/concept_images/3.jpeg)
![<Ori> 4](https://huggingface.co/sd-concepts-library/ori/resolve/main/concept_images/4.jpeg)
| ed4ac9b554063e80fe09a74f1ff1f391 |
sgangireddy/whisper-medium-cv-fi-3k | sgangireddy | whisper | 22 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fi'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,513 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium Finnish CV 4K
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 fi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3412
- Wer: 15.7369
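A transcription sketch is shown below; it assumes the fine-tuned checkpoint is available under this repo id, and the audio path is a placeholder for a Finnish recording.
```python
from transformers import pipeline

# Hedged sketch: long-form audio is chunked into 30 s windows.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="sgangireddy/whisper-medium-cv-fi-3k",
    chunk_length_s=30,
)
print(transcriber("finnish_sample.wav")["text"])
```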
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0014 | 19.0 | 1000 | 0.3029 | 16.3117 |
| 0.0002 | 38.01 | 2000 | 0.3412 | 15.7369 |
| 0.0001 | 57.01 | 3000 | 0.3592 | 15.8783 |
| 0.0001 | 76.01 | 4000 | 0.3655 | 15.8594 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| cdd37d161fb2422243096f3bea1c9f6b |
vicky10011001/ddpm-butterflies-128 | vicky10011001 | null | 14 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,235 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
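As a minimal sketch while the snippet above is still a TODO — assuming the checkpoint loads as a standard unconditional `DDPMPipeline`:
```python
from diffusers import DDPMPipeline

# Assumed pipeline class for this unconditional image-generation checkpoint.
pipeline = DDPMPipeline.from_pretrained("vicky10011001/ddpm-butterflies-128")

# Sample one 128x128 butterfly image.
image = pipeline().images[0]
image.save("butterfly.png")
```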
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/vicky10011001/ddpm-butterflies-128/tensorboard?#scalars)
| 4221127ddf2a2206eee958ceeae4f297 |
Geotrend/bert-base-en-ru-cased | Geotrend | bert | 8 | 993 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,292 | false |
# bert-base-en-ru-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ru-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ru-cased")
```
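Because the checkpoint is a masked-language model, it can also be queried through the `fill-mask` pipeline; this sketch assumes the MLM head from the original multilingual checkpoint is preserved, and the example sentence is ours:
```python
from transformers import pipeline

# Masked-token prediction with the trimmed English/Russian model.
fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-ru-cased")

for prediction in fill_mask("Moscow is the [MASK] of Russia."):
    print(prediction["token_str"], round(prediction["score"], 3))
```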
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
| ce24ef7f2f7ca08590869cc1a7a72b0e |
Geotrend/distilbert-base-en-fr-lt-no-pl-cased | Geotrend | distilbert | 6 | 5 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,251 | false |
# distilbert-base-en-fr-lt-no-pl-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
```
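To sanity-check the claim that the trimmed model matches the original representations, the hidden states can be compared against `distilbert-base-multilingual-cased`; the sentence below is an illustrative assumption, and the check only holds when both tokenizers split it into the same subwords:
```python
import torch
from transformers import AutoTokenizer, AutoModel

text = "Paris est la capitale de la France."

small_tok = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
small_model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-lt-no-pl-cased")
full_tok = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
full_model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")

with torch.no_grad():
    small_h = small_model(**small_tok(text, return_tensors="pt")).last_hidden_state
    full_h = full_model(**full_tok(text, return_tensors="pt")).last_hidden_state

# Expected to print True when the tokenizations match.
print(torch.allclose(small_h, full_h, atol=1e-5))
```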
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
| 8138dfb077c8c392a729e8316ff30bdd |