| Column | Type | Lengths / Values |
|---|---|---|
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27 |
| model_type | string | lengths 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | length 32 |
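The rows that follow conform to this schema. Below is a minimal sketch of how the dump could be queried with the `datasets` library; the dataset id `open-source-metrics/model-repos-stats` is an assumption used purely for illustration, since the actual repo id of this dump is not stated here.

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the real repo id of this dump.
ds = load_dataset("open-source-metrics/model-repos-stats", split="train")
print(ds.column_names)  # repo_id, author, model_type, ..., readme, hash

# Example query: the five most-downloaded Apache-2.0 transformers repos.
subset = ds.filter(lambda r: r["library"] == "transformers" and r["license"] == "apache-2.0")
for row in sorted(subset, key=lambda r: r["downloads_30d"], reverse=True)[:5]:
    print(row["repo_id"], row["downloads_30d"])
```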
Neprox/STT-Swedish-Whisper
Neprox
whisper
25
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sv']
null
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,850
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small - Swedish This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4312 - Wer: 19.0503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 18000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.0887 | 1.71 | 2000 | 0.2817 | 21.0831 | | 0.0168 | 3.41 | 4000 | 0.3108 | 19.6338 | | 0.0027 | 5.12 | 6000 | 0.3421 | 19.8731 | | 0.0012 | 6.83 | 8000 | 0.3713 | 19.1229 | | 0.0005 | 8.53 | 10000 | 0.3844 | 19.2036 | | 0.0004 | 10.24 | 12000 | 0.3900 | 19.0369 | | 0.0008 | 11.94 | 14000 | 0.4161 | 19.9511 | | 0.0002 | 13.65 | 16000 | 0.4201 | 19.1283 | | 0.0001 | 15.36 | 18000 | 0.4312 | 19.0503 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
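The card above gives no usage snippet; here is a minimal inference sketch (an assumption, not part of the original card) using the `transformers` ASR pipeline, where `sample.wav` is a placeholder path to a 16 kHz Swedish recording.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="Neprox/STT-Swedish-Whisper")

# "sample.wav" is a placeholder path to a Swedish audio file.
print(asr("sample.wav")["text"])
```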
2cdb8fc49c03c9be72ab28ceb8627f13
Helsinki-NLP/opus-mt-gil-en
Helsinki-NLP
marian
10
33
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-gil-en * source languages: gil * target languages: en * OPUS readme: [gil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.gil.en | 36.0 | 0.522 |
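The card documents the checkpoint but shows no inference code; below is a hedged usage sketch with the Marian classes from `transformers`. The input sentence is only a sample Gilbertese greeting.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gil-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a sample Gilbertese sentence to English.
batch = tokenizer(["Ko na mauri!"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```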
0313108026b963aa2125b851fac7c83e
maastrichtlawtech/legal-camembert
maastrichtlawtech
camembert
8
28
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['fr']
null
null
0
0
0
0
0
0
0
['legal']
false
true
true
1,284
false
# Legal-CamemBERT * Legal-CamemBERT is a [CamemBERT](https://huggingface.co/camembert-base)-based model further pre-trained on [23,000+ statutory articles](https://huggingface.co/datasets/maastrichtlawtech/bsard) from the Belgian legislation. * We chose the following training set-up: 50k training steps (200 epochs) with batches of 32 sequences of length 512 with an initial learning rate of 5e-5. * Training was performed on one Tesla V100 GPU with 32 GB using the [code](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) provided by Hugging Face. --- ### Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("maastrichtlawtech/legal-camembert") model = AutoModel.from_pretrained("maastrichtlawtech/legal-camembert") ``` ### About Us The [Maastricht Law & Tech Lab](https://www.maastrichtuniversity.nl/about-um/faculties/law/research/law-and-tech-lab) develops algorithms, models, and systems that allow computers to process natural language texts from the legal domain. Author: [Antoine Louis](https://antoinelouis.co) on behalf of the [Maastricht Law & Tech Lab](https://www.maastrichtuniversity.nl/about-um/faculties/law/research/law-and-tech-lab).
2c4b02515983e627fd481af194fa5906
Buseak/model_from_berturk_1401_v2
Buseak
bert
12
13
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,645
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_from_berturk_1401_v2 This model is a fine-tuned version of [Buseak/model_from_berturk_1401](https://huggingface.co/Buseak/model_from_berturk_1401) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1542 - Precision: 0.9414 - Recall: 0.9356 - F1: 0.9385 - Accuracy: 0.9569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 244 | 0.2277 | 0.9129 | 0.9058 | 0.9094 | 0.9362 | | No log | 2.0 | 488 | 0.1855 | 0.9275 | 0.9204 | 0.9240 | 0.9472 | | 0.2477 | 3.0 | 732 | 0.1602 | 0.9403 | 0.9315 | 0.9359 | 0.9554 | | 0.2477 | 4.0 | 976 | 0.1542 | 0.9414 | 0.9356 | 0.9385 | 0.9569 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
3a6db5c8085dcda73da4cdaf591bccd2
mvip/wav2vec2-large-xls-r-300m-tr
mvip
wav2vec2
13
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,720
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4074 - Wer: 0.4227 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9399 | 4.21 | 400 | 0.7252 | 0.7387 | | 0.4147 | 8.42 | 800 | 0.4693 | 0.5201 | | 0.1855 | 12.63 | 1200 | 0.4584 | 0.4848 | | 0.1256 | 16.84 | 1600 | 0.4464 | 0.4708 | | 0.0948 | 21.05 | 2000 | 0.4261 | 0.4389 | | 0.0714 | 25.26 | 2400 | 0.4331 | 0.4349 | | 0.0532 | 29.47 | 2800 | 0.4074 | 0.4227 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
f1be29580f983b924010104fe3c26dd7
JuandaBula/vit-model-juan-bula
JuandaBula
vit
13
21
transformers
0
image-classification
true
false
false
apache-2.0
null
['beans']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,223
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-model-juan-bula This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0077 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0067 | 3.85 | 500 | 0.0077 | 1.0 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cpu - Datasets 2.7.1 - Tokenizers 0.13.2
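A minimal classification sketch (assumed; the generated card contains no usage code), where `leaf.jpg` is a placeholder path to a bean-leaf image.

```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="JuandaBula/vit-model-juan-bula")
print(classifier("leaf.jpg"))  # list of {label, score} dicts
```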
2c42d40ad86723d30cecbeae3b712830
BiggieW/chinese-bert-wwm-finetuned-chnsenticorp
BiggieW
bert
16
0
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,650
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-bert-wwm-finetuned-chnsenticorp This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on a small subset of the chnsenticorp dataset. It achieves the following results on the evaluation set: - Loss: 3.0868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0096 | 1.0 | 15 | 3.7742 | | 1.7336 | 2.0 | 30 | 3.9102 | | 2.5286 | 3.0 | 45 | 3.4744 | | 2.8892 | 4.0 | 60 | 3.1142 | | 2.7188 | 5.0 | 75 | 2.7622 | | 2.7923 | 6.0 | 90 | 3.1119 | | 2.4094 | 7.0 | 105 | 3.0426 | | 2.5928 | 8.0 | 120 | 2.8928 | | 2.4072 | 9.0 | 135 | 2.9462 | | 2.4349 | 10.0 | 150 | 2.7645 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
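A hedged fill-mask sketch (an assumption; the generated card shows no usage code), with a sample hotel-review sentence in the spirit of chnsenticorp.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="BiggieW/chinese-bert-wwm-finetuned-chnsenticorp")
# "The service at this hotel is very [MASK]." — BERT-style mask token.
print(fill("这家酒店的服务非常[MASK]。"))
```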
7748c15a87b456d9620f39be1e077545
kadirnar/yolox_x-v0.1.1
kadirnar
null
3
0
null
0
object-detection
false
false
false
apache-2.0
null
['detection-datasets/coco']
null
0
0
0
0
0
0
0
['object-detection', 'computer-vision', 'yolox', 'yolov3', 'yolov5']
false
true
true
1,197
false
### Model Description [YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. [YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): This repo is a packaged version of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use. [Paper Repo]: Implementation of paper - [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) ### Installation ``` pip install yoloxdetect ``` ### Yolox Inference ```python from yoloxdetect import YoloxDetector from yolox.data.datasets import COCO_CLASSES model = YoloxDetector( model_path = "kadirnar/yolox_x-v0.1.1", config_path = "configs.yolox_x", device = "cuda:0", hf_model=True ) model.classes = COCO_CLASSES model.conf = 0.25 model.iou = 0.45 model.show = False model.save = True pred = model.predict(image='data/images', img_size=640) ``` ### BibTeX Entry and Citation Info ``` @article{yolox2021, title={YOLOX: Exceeding YOLO Series in 2021}, author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian}, journal={arXiv preprint arXiv:2107.08430}, year={2021} } ```
f665753619e435abfb01eabd95f51ce7
Helsinki-NLP/opus-mt-sv-mt
Helsinki-NLP
marian
10
12
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-sv-mt * source languages: sv * target languages: mt * OPUS readme: [sv-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mt/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.mt | 32.2 | 0.509 |
1f2d2e314f48696924514a3a8aa10131
jonatasgrosman/exp_w2v2t_th_xlsr-53_s218
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['th']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'th']
false
true
true
464
false
# exp_w2v2t_th_xlsr-53_s218 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
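A short transcription sketch (an assumption, though it uses the HuggingSound tool the card itself references); `sample_th.wav` is a placeholder path to a recording.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_xlsr-53_s218")
# Each input file should be sampled at 16 kHz, as the card requires.
transcriptions = model.transcribe(["sample_th.wav"])
print(transcriptions[0]["transcription"])
```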
049a502c21e076082ef02c4e2e316097
ogimgio/bert-base-german-cased-issues-128-finetuned
ogimgio
bert
12
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,393
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-issues-128-finetuned This model is a fine-tuned version of [ogimgio/bert-base-german-cased-issues-128](https://huggingface.co/ogimgio/bert-base-german-cased-issues-128) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3858 - Micro f1: 0.6157 - Macro f1: 0.5597 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.4741 | 1.0 | 102 | 0.4254 | 0.5535 | 0.4051 | | 0.3799 | 2.0 | 204 | 0.3858 | 0.6157 | 0.5597 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
c29b5b847cd2496be69ce8bfbb677940
muhtasham/small-mlm-rotten_tomatoes
muhtasham
bert
10
1
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,012
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-rotten_tomatoes This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.944 | 0.47 | 500 | 3.7349 | | 3.8232 | 0.94 | 1000 | 3.5014 | | 3.6092 | 1.41 | 1500 | 3.4616 | | 3.6009 | 1.87 | 2000 | 3.5919 | | 3.5219 | 2.34 | 2500 | 3.4356 | | 3.4291 | 2.81 | 3000 | 3.4680 | | 3.3769 | 3.28 | 3500 | 3.4817 | | 3.3216 | 3.75 | 4000 | 3.4055 | | 3.3562 | 4.22 | 4500 | 3.4558 | | 3.2755 | 4.69 | 5000 | 3.4803 | | 3.2044 | 5.15 | 5500 | 3.3968 | | 3.2438 | 5.62 | 6000 | 3.4400 | | 3.2322 | 6.09 | 6500 | 3.4033 | | 3.0966 | 6.56 | 7000 | 3.3795 | | 3.1239 | 7.03 | 7500 | 3.4509 | | 3.0585 | 7.5 | 8000 | 3.3826 | | 2.9747 | 7.97 | 8500 | 3.4233 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
cb8fd57d981102de169ffa0192478a63
stanfordnlp/corenlp-german
stanfordnlp
null
3
0
null
0
null
false
false
false
gpl-2.0
['de']
null
null
0
0
0
0
0
0
0
['corenlp']
false
true
true
659
false
# CoreNLP model for German CoreNLP is your one-stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations. Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP). This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2023-01-21 01:37:19.688
2faf0d65f1bdcdaeb7c105a326fed778
troesy/distilBERT-fresh
troesy
distilbert
12
14
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,504
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT-fresh This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1444 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 174 | 0.1957 | 0.0 | 0.0 | 0.0 | 0.9289 | | No log | 2.0 | 348 | 0.1591 | 0.0 | 0.0 | 0.0 | 0.9438 | | 0.2272 | 3.0 | 522 | 0.1444 | 0.0 | 0.0 | 0.0 | 0.9489 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
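A minimal token-classification sketch (assumed, not part of the generated card).

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="troesy/distilBERT-fresh")
# Returns one {word, entity, score, ...} dict per tagged token.
print(tagger("Hugging Face is based in New York City."))
```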
f867e49098e054bb663c60f01362bb07
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat
andi611
bert
13
5
transformers
0
question-answering
true
false
false
cc-by-4.0
['en']
['squad_v2', 'mit_restaurant']
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,146
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the mit_restaurant datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
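A hedged question-answering sketch (assumed; the card itself has no usage section), with a restaurant-flavored example to match the MIT Restaurant portion of the training mix.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-mit-restaurant-with-neg-with-repeat",
)
result = qa(
    question="What cuisine does the restaurant serve?",
    context="The corner bistro serves modern Italian cuisine until midnight.",
)
print(result["answer"], result["score"])
```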
d11ec724316a65d7fdb6c25b1a541069
SherlockGuo/distilbert-base-uncased-finetuned-squad
SherlockGuo
distilbert
12
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,279
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 3.7677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 4.1121 | | No log | 2.0 | 126 | 3.8248 | | No log | 3.0 | 189 | 3.7677 | ### Framework versions - Transformers 4.19.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
c1eaa7dcc2228c5049f9fa4b99547533
aioxlabs/dvoice-wolof
aioxlabs
wav2vec2
8
11
speechbrain
0
automatic-speech-recognition
true
false
false
apache-2.0
['wo']
['commonvoice']
null
0
0
0
0
0
0
0
['CTC', 'pytorch', 'speechbrain', 'Transformer']
false
true
true
6,446
false
# wav2vec 2.0 with CTC/Attention trained on DVoice Wolof (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on the [ALFFA](https://github.com/besacier/ALFFA_PUBLIC) Wolof dataset within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). | DVoice Release | Val. CER | Val. WER | Test CER | Test WER | |:-------------:|:---------------------------:| -----:| -----:| -----:| | v2.0 | 4.81 | 16.25 | 4.83 | 16.05 | # Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and is trained with the train transcriptions. - Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on the Wolof dataset. The obtained final acoustic representation is given to the CTC greedy decoder. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. # Install SpeechBrain First of all, please install transformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please note that we encourage you to read the SpeechBrain tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). # Transcribing your own audio files (in Wolof) ```python from speechbrain.pretrained import EncoderASR asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-wolof", savedir="pretrained_models/asr-wav2vec2-dvoice-wol") asr_model.transcribe_file('./the_path_to_your_audio_file') ``` # Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. # Training To train the model from scratch, please see our GitHub tutorial [here](https://github.com/AIOXLABS/DVoice). # Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # About DVoice DVoice is a community initiative that aims to provide African low-resource languages with data and models to facilitate their use of voice technologies. The lack of data on these languages makes it necessary to collect data using methods that are specific to each one. Two different approaches are currently used: the DVoice platforms ([https://dvoice.ma](https://dvoice.ma) and [https://dvoice.sn](https://dvoice.sn)), which are based on Mozilla Common Voice, for collecting authentic recordings from the community, and transfer learning techniques for automatically labeling recordings that are retrieved from social media. The DVoice platform currently manages 7 languages including Darija (Moroccan Arabic dialect), whose dataset appears on this version, Wolof, Mandingo, Serere, Pular, Diola and Soninke. For this project, AIOX Labs and the SI2M Laboratory are joining forces to build the future of technologies together. # About AIOX Labs Based in Rabat, London and Paris, AIOX-Labs mobilizes artificial intelligence technologies to meet the business needs and data projects of companies. - It serves the growth of groups, the optimization of processes, and the improvement of the customer experience. - AIOX-Labs is multi-sector, from fintech to industry, including retail and consumer goods. - Business-ready data products with a solid algorithmic base and adaptability for the specific needs of each client. - A complementary team made up of AI PhDs and business experts with a solid scientific base and international publications. Website: [https://www.aiox-labs.com/](https://www.aiox-labs.com/) # SI2M Laboratory The Information Systems, Intelligent Systems and Mathematical Modeling Research Laboratory (SI2M) is an academic research laboratory of the National Institute of Statistics and Applied Economics (INSEA). The research areas of the laboratory are Information Systems, Intelligent Systems, Artificial Intelligence, Decision Support, Network and System Security, and Mathematical Modelling. Website: [SI2M Laboratory](https://insea.ac.ma/index.php/pole-recherche/equipe-de-recherche/150-laboratoire-de-recherche-en-systemes-d-information-systemes-intelligents-et-modelisation-mathematique) # About SpeechBrain SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. Website: https://speechbrain.github.io/ GitHub: https://github.com/speechbrain/speechbrain # Referencing SpeechBrain ``` @misc{SB2021, author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua }, title = {SpeechBrain}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/speechbrain/speechbrain}}, } ``` # Acknowledgements This research was supported through computational resources of HPC-MARWAN (www.marwan.ma/hpc) provided by CNRST, Rabat, Morocco. We deeply thank this institution.
97fafff705d890dac7c0608b201b7d1f
Arch4ngel/untitled_goose-goose
Arch4ngel
null
17
14
diffusers
1
text-to-image
true
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
true
true
772
false
# DreamBooth model for the untitled_goose concept trained by Arch4ngel on the Arch4ngel/untitled_goose_game dataset. This is a Stable Diffusion model fine-tuned on the untitled_goose concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of untitled_goose goose** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description Stable Diffusion model fine-tuned for generating the Goose from Untitled Goose Game images. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('Arch4ngel/untitled_goose-goose') image = pipeline('a photo of untitled_goose goose').images[0] image ```
8ddb6b2b1e5f39145ebb9caac364b8d5
jonatasgrosman/exp_w2v2t_de_vp-nl_s283
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
469
false
# exp_w2v2t_de_vp-nl_s283 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
edb3b2bd82d3f007d789e7f9cdb16b40
gzinzi/miles
gzinzi
gpt_neo
12
2
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,245
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # miles This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.6360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 10.7544 | | No log | 2.0 | 4 | 10.6614 | | No log | 3.0 | 6 | 10.6360 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
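A minimal generation sketch (an assumption; the generated card shows no usage code, and the sampling parameters here are illustrative defaults).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gzinzi/miles")
# Sample prompt; do_sample and max_new_tokens are illustrative choices.
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```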
86f6ec06165a41112dfe0bb4b82de634
steja/whisper-small-tamil
steja
whisper
19
1
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ta']
['google/fleurs']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,590
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-tamil This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs dataset for Tamil. It achieves the following results on the evaluation set: - Loss: 0.42 - Wer: 15.02 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0882 | 2.27 | 500 | 0.2674 | 16.7354 | | 0.0026 | 11.76 | 1000 | 0.3508 | 15.3720 | | 0.0012 | 17.64 | 1500 | 0.3920 | 15.6156 | | 0.0009 | 23.53 | 2000 | 0.4076 | 15.4284 | | 0.0002 | 29.41 | 2500 | 0.4268 | 15.0215 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
f23ab62558640e09cb359b3461aea2b2
gkss/distilbert-base-uncased-finetuned-squad
gkss
distilbert
10
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
922
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0 - Datasets 2.2.1 - Tokenizers 0.12.1
8cdb5f146bddb75d9867b12c288ee263
AustinCarthy/phishing-bert-base-uncased-finetuned-dsV0
AustinCarthy
bert
11
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,546
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phishing-bert-base-uncased-finetuned-dsV0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0194 - Accuracy: 0.9966 - F1: 0.9632 - Precision: 0.9878 - Recall: 0.9397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0361 | 1.0 | 5185 | 0.0197 | 0.9950 | 0.9449 | 0.9911 | 0.9028 | | 0.0106 | 2.0 | 10370 | 0.0202 | 0.9959 | 0.9553 | 0.9940 | 0.9195 | | 0.0039 | 3.0 | 15555 | 0.0194 | 0.9966 | 0.9632 | 0.9878 | 0.9397 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
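A hedged scoring sketch (assumed; the card does not document the label names, so treat the returned labels as whatever the checkpoint's config defines).

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="AustinCarthy/phishing-bert-base-uncased-finetuned-dsV0",
)
# Sample suspicious-looking input; returns [{"label": ..., "score": ...}].
print(detector("http://secure-login.example-bank.verify-account.ru/update"))
```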
5a936eebb7e2bad5430e9247339b068b
AlekseyCalvin/Make_Putin_Queer_Please
AlekseyCalvin
clip_text_model
45
59
diffusers
0
text-to-image
true
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
1
0
1
['text-to-image']
false
true
true
1,172
false
### Queer Vladimir Putin Dreambooth SD Model Dreambooth model trained by A.C.T. SOON® with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! To generate custom images of queer and/or trans alter-dimensional identities of the infamous reigning spook Vladimir Putin, use "trp" or "trp person" in your Stable Diffusion prompt during inference with this model. Alongside other crucial, yet oft-neglected, documentary content available in the public sphere ("Putin finally appears in drag", "Putin plays piano in Bowie wig", "femme Putin", etc.), this model was fine-tuned on numerous distinct variants of the classic "queer Putin" meme, which had once spread like wildfiring rainbows in response to the 2018 intensification of the Russian government's ruthlessly inhumane crackdowns on LGBTQ+ persons and communities.
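A minimal generation sketch (an assumption, though it is built around the concept prompt the card prescribes); the fp16/CUDA settings presume a GPU is available.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "AlekseyCalvin/Make_Putin_Queer_Please", torch_dtype=torch.float16
).to("cuda")

# "trp person" is the concept token the card tells you to use.
image = pipe("a portrait photo of trp person").images[0]
image.save("trp.png")
```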
a77ee054c0bae6ca5fd6197e700be572
ghatgetanuj/albert-large-v2_cls_SentEval-CR
ghatgetanuj
albert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,520
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2_cls_SentEval-CR This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2767 - Accuracy: 0.9509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 189 | 0.2880 | 0.9124 | | No log | 2.0 | 378 | 0.3215 | 0.9097 | | 0.3335 | 3.0 | 567 | 0.2229 | 0.9309 | | 0.3335 | 4.0 | 756 | 0.2610 | 0.9442 | | 0.3335 | 5.0 | 945 | 0.2767 | 0.9509 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
aa1fa8fb4cd60034280c910724a9058a
unicamp-dl/ptt5-small-portuguese-vocab
unicamp-dl
t5
8
119
transformers
0
text2text-generation
true
true
true
mit
['pt']
['brWaC']
null
0
0
0
0
0
0
0
['t5', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
true
true
2,575
false
# Portuguese T5 (aka "PTT5") ## Introduction PTT5 is a T5 model pretrained on the BrWaC corpus, a large collection of web pages in Portuguese, improving T5's performance on Portuguese sentence similarity and entailment tasks. It's available in three sizes (small, base and large) and two vocabularies (Google's original T5 vocabulary and ours, trained on Portuguese Wikipedia). For further information or requests, please go to the [PTT5 repository](https://github.com/unicamp-dl/PTT5). ## Available models | Model | Size | #Params | Vocabulary | | :-: | :-: | :-: | :-: | | [unicamp-dl/ptt5-small-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-small-t5-vocab) | small | 60M | Google's T5 | | [unicamp-dl/ptt5-base-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-base-t5-vocab) | base | 220M | Google's T5 | | [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) | large | 740M | Google's T5 | | [unicamp-dl/ptt5-small-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-small-portuguese-vocab) | small | 60M | Portuguese | | **[unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab)** **(Recommended)** | **base** | **220M** | **Portuguese** | | [unicamp-dl/ptt5-large-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-large-portuguese-vocab) | large | 740M | Portuguese | ## Usage ```python # Tokenizer from transformers import T5Tokenizer # PyTorch (bare model, bare model + language modeling head) from transformers import T5Model, T5ForConditionalGeneration # TensorFlow (bare model, bare model + language modeling head) from transformers import TFT5Model, TFT5ForConditionalGeneration model_name = 'unicamp-dl/ptt5-base-portuguese-vocab' tokenizer = T5Tokenizer.from_pretrained(model_name) # PyTorch model_pt = T5ForConditionalGeneration.from_pretrained(model_name) # TensorFlow model_tf = TFT5ForConditionalGeneration.from_pretrained(model_name) ``` # Citation If you use PTT5, please cite: ``` @article{ptt5_2020, title={PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data}, author={Carmo, Diedre and Piau, Marcos and Campiotti, Israel and Nogueira, Rodrigo and Lotufo, Roberto}, journal={arXiv preprint arXiv:2008.09144}, year={2020} } ```
5d2e5cdb5ddb61cf262b6d1529f9d12e
jonatasgrosman/exp_w2v2t_pl_vp-fr_s932
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pl']
false
true
true
469
false
# exp_w2v2t_pl_vp-fr_s932 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a86ca6e875d6cdef5d9f87c18a40ee93
deepparag/Aeona-Beta
deepparag
gpt2
9
3
transformers
2
conversational
true
false
false
mit
null
null
null
2
1
1
0
1
1
0
['conversational']
false
true
true
4,147
false
# Aeona | Chatbot ![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Recommended for use along with an [AIML chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot. Using an AIML chatbot will also allow you to hardcode some replies. # AEONA Aeona is a chatbot which hopes to be able to talk with humans as if it's a friend! Its main target platform is Discord. You can invite the bot [here](https://aeona.xyz). To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyx/). Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user. ## Goals The goal is to create an AI which will work with AIML in order to create the most human-like AI. #### Why not an AI on its own? For an AI it is not (realistically) possible to learn about the user and store data on them, whereas an AIML chatbot can even execute code! The goal of the AI is to generate responses where the AIML fails. Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible! So we use 3 datasets: 1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines! 2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages are on a wide variety of topics, filtered with spam removed, which makes the AI highly random but gives it a response to everyday questions! About 120 million messages! 3. A custom dataset scraped from my messages. These messages are very narrow; teaching this dataset and sending a random reply will make the AI say sorry loads of times! ## Training The Discord Messages dataset simply dwarfs the other datasets; hence the datasets are repeated. This leads to them covering each other's issues! The AI has a context of 6 messages, which means it will reply until the 4th message from the user. [Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1) ## Tips for Hugging Face inference I recommend sending the user input plus the previous 3 AI and human responses. Using more context than this will lead to useless responses; using less is alright, but the responses may be random. ## Evaluation Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics. | Model | Perplexity | |---|---| | Seq2seq Baseline [3] | 29.8 | | Wolf et al. [5] | 16.3 | | GPT-2 baseline | 99.5 | | DialoGPT baseline | 56.6 | | DialoGPT finetuned | 11.4 | | PersonaGPT | 10.2 | | **Aeona** | **7.9** | ## Usage Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona") model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
1bdd8cb473a2c59060cc0612a96adf1e
lorenzoscottb/bert-base-cased-PLANE-ood-2
lorenzoscottb
bert
10
22
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['lorenzoscottb/PLANE-ood']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,537
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT for PLANE classification This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on one of PLANE's dataset splits (no. 2), introduced in [Bertolini et al., COLING 2022](https://aclanthology.org/2022.coling-1.359/). It achieves the following results on the evaluation set: - Accuracy: 0.9043 ## Model description The model is trained to perform a sequence classification task over phrase-level adjective-noun inferences (e.g., "A red car is a vehicle"). ## Intended uses & limitations The scope of the model is not to run lexical entailment (i.e., hypernym detection). The model is trained solely to perform a very specific subset of phrase-level entailment, based on adjective-noun phrases. The types of questions you should ask the model are limited, and should have one of three forms: - An *Adjective-Noun* is a *Noun* (e.g. A red car is a car) - An *Adjective-Noun* is a *Hypernym(Noun)* (e.g. A red car is a vehicle) - An *Adjective-Noun* is an *Adjective-Hypernym(Noun)* (e.g. A red car is a red vehicle) Linguistically speaking, adjectives belong to three macro classes (intersective, subsective, and intensional). From a linguistic and logical standpoint, these classes shape the truth values of the three forms above. For instance, since red is an intersective adjective, the three forms are all true. A subsective adjective like small allows just the first two, but not the last – that is, logically speaking, a small car is not a small vehicle. In other words, the model was built to study out-of-distribution compositional generalisation with respect to a very specific set of compositional phenomena. This poses clear limitations on the questions you can ask the model. For instance, if you had to query the model with a basic (false) hypernym detection task (e.g., *A dog is a cat*), the model will consider it as true. ## Training and evaluation data The data used for training and testing, as well as the other splits used for the experiments, are available on the paper's git page [here](https://github.com/lorenzoscottb/PLANE). The reported accuracy refers to out-of-distribution evaluation; that is, the model was tested on the same text-classification task, but on unseen adjectives and nouns. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1 # Cite If you want to use the model or data in your work, please reference the paper too: ``` @inproceedings{bertolini-etal-2022-testing, title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment", author = "Bertolini, Lorenzo and Weeds, Julie and Weir, David", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.359", pages = "4084--4100", } ```
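A hedged sketch (not in the original card) of querying the classifier with the three supported entailment forms described above.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lorenzoscottb/bert-base-cased-PLANE-ood-2")
# One query per supported form; label names follow the checkpoint's config.
for query in ["A red car is a car", "A red car is a vehicle", "A red car is a red vehicle"]:
    print(query, clf(query))
```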
7fe1fafe8410c9ab82170efcd93ba13d
lakssrini/dpt-lvngrooms
lakssrini
null
17
331
diffusers
0
null
true
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'stable-diffusion', 'depth-to-image', 'diffusion-models-class']
false
true
true
773
false
# DreamBooth model for the lvngrooms concept trained by lakssrini on the custom real estate listings dataset. This is a Stable Diffusion depth-to-image model fine-tuned on the lvngrooms concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of lvngrooms room** ## Description This is a Stable Diffusion depth-to-image model fine-tuned on `room` images. ## Usage ```python from PIL import Image from diffusers import StableDiffusionDepth2ImgPipeline pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained('lakssrini/dpt-lvngrooms') init_image = Image.open("XXX") prompt = "a photo of lvngrooms room" guidance_scale = 7.5 image = pipeline( prompt=prompt.strip(), image=init_image, negative_prompt="Oversaturated, blurry, low quality", guidance_scale=guidance_scale ).images[0] image ```
dbef9b3cf4adef0e9b5020da66c48c6d
Prajeevan/samantharuth
Prajeevan
null
34
6
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,925
false
### samantharuth Dreambooth model trained by Prajeevan with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: samantharuth (use that in your prompt) ![samantharuth 0](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%281%29.jpg)![samantharuth 1](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%282%29.jpg)![samantharuth 2](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%283%29.jpg)![samantharuth 3](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%284%29.jpg)![samantharuth 4](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%285%29.jpg)![samantharuth 5](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%286%29.jpg)![samantharuth 6](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%287%29.jpg)![samantharuth 7](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%288%29.jpg)![samantharuth 8](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%289%29.jpg)![samantharuth 9](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%2810%29.jpg)![samantharuth 10](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%2811%29.jpg)![samantharuth 11](https://huggingface.co/Prajeevan/samantharuth/resolve/main/concept_images/samantharuth_%2812%29.jpg)
ed585dfc04ebb34146ad4e9feeff9465
infinitejoy/wav2vec2-large-xls-r-300m-marathi-cv8
infinitejoy
wav2vec2
18
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['mr']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mr', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
1,524
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-marathi-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the evaluation set: - Loss: 0.6483 - Wer: 0.6049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.671 | 22.73 | 500 | 1.3618 | 0.9499 | | 1.1599 | 45.45 | 1000 | 0.6330 | 0.6627 | | 0.8252 | 68.18 | 1500 | 0.6226 | 0.6426 | | 0.6424 | 90.91 | 2000 | 0.6359 | 0.6041 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
977bb8a249591b3d7e332c529807bff3
espnet/ftshijt_espnet2_asr_dsing_transformer
espnet
null
33
3
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['noinfo']
['dsing']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
10,072
false
## ESPnet2 ASR model ### `espnet/ftshijt_espnet2_asr_dsing_transformer` This model was trained by jiatong using dsing recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/dsing/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_dsing_transformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Mar 20 00:28:37 EDT 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.10.1` - Git hash: `c1ed71c6899e54c0b3dad82687886b1183cd0885` - Commit date: `Wed Mar 16 23:34:49 2022 -0400` ## asr_train_asr_raw_bpe500_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|4018|77.0|16.2|6.8|4.0|27.0|65.1| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|4632|76.1|17.3|6.6|3.7|27.6|57.7| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|18692|85.0|5.8|9.2|4.2|19.2|65.1| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|21787|84.9|6.3|8.8|4.2|19.3|57.7| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/dev|482|6097|75.2|12.8|12.0|4.1|28.9|65.1| |decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_valid.acc.ave/test|480|7736|75.3|14.3|10.4|4.1|28.8|57.7| ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_raw_bpe500_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: 15 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_bpe500_sp/train/speech_shape - exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 
train_data_path_and_name_and_type: - - dump/raw/train30_sp/wav.scp - speech - kaldi_ark - - dump/raw/train30_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁I - '''' - ▁YOU - S - T - ▁THE - M - ▁ME - ▁A - ▁AND - ▁TO - E - A - ING - D - ▁MY - ▁ - O - ▁IT - I - N - RE - Y - ▁BE - ▁IN - ▁ON - ▁LOVE - U - ▁WE - LL - H - ▁YOUR - ▁S - IN - ▁OF - ▁DO - ▁THAT - ▁ALL - L - ▁DON - ▁OH - ▁LIKE - ▁KNOW - ▁FOR - ▁CAN - ▁JUST - P - ▁BUT - ED - K - ▁WHEN - ▁SO - R - ▁GO - ▁WHAT - ▁C - ▁WITH - W - ▁F - C - ▁NO - ER - ▁ONE - ▁LET - VE - ES - ▁NOW - ▁BABY - G - ▁GOT - ▁COME - CAUSE - LE - B - ▁B - AR - ▁UP - ▁' - ▁W - ▁SEE - ▁TIME - ▁ARE - ▁G - ▁LOOK - ▁THIS - F - ▁IS - ▁NEVER - ▁M - ▁P - AN - ▁WAS - ▁WAY - ▁IF - OR - ▁SAY - V - ▁R - ▁T - ▁DOWN - RA - ▁THERE - ▁HEART - ▁NOT - RO - ▁WILL - ▁OUT - CE - ▁WANT - ▁YEAH - ▁HAVE - ▁GIVE - ▁TOO - ▁GONNA - ▁HOW - ▁NEED - ▁GET - ▁TAKE - ▁EVERY - ▁FEEL - ▁HE - EN - ▁FROM - ▁HA - ▁K - ▁SHE - 'ON' - ▁DI - RI - ▁ONLY - NE - ▁WHO - ▁AWAY - ▁E - ▁D - ▁LIFE - ▁MAKE - IC - ▁BACK - ▁WHERE - ▁MADE - ▁DAY - ▁HERE - ▁LO - ▁HER - ▁AS - ▁GOOD - ▁WANNA - ▁OOH - ▁TELL - LY - TH - ▁WON - ▁LIGHT - ▁KEEP - ▁MA - ▁LA - ▁SH - ▁WORLD - ▁MORE - ▁LI - AL - ▁COULD - ▁GIRL - ▁NOTHING - ▁EVER - ▁THINK - IE - ▁BY - ▁AT - ▁TONIGHT - ▁THEY - ▁CALL - ▁HO - ▁WOULD - IL - ▁OUR - ▁FALL - ▁NIGHT - ▁THAN - ▁DE - ▁SOME - ▁WAIT - ▁RIGHT - ▁RE - ▁HALLELUJAH - ▁TH - NG - ▁CO - ▁WERE - ▁TALK - ET - ▁BO - ▁HOLD - UR - ▁BEEN - ▁US - ▁PA - VER - ▁EYES - ▁DREAM - ▁SONG - ▁SHOULD - ▁STILL - ▁OVER - TA - ▁ANYMORE - IGHT - ▁STAY - ▁BETTER - LESS - ▁THROUGH - ▁LITTLE - X - ▁GONE - ▁AIN - ▁DA - ▁HOLDING - ▁HURT - ▁TRY - ▁FIND - Z - DE - ▁LAST - ▁SAID - ▁ALWAYS - ▁BODY - ▁MIND - ▁CRY - ▁EVEN - ▁RUN - ▁HOPE - ▁WITHOUT - ▁MISS - ▁ABOUT - ▁HAND - ▁J - ▁AGAIN - ▁THOUGH - ▁NAH - ▁LIVE - ▁BA - ▁OLD - ▁HEAD - ▁FIRE - ▁MAN - ▁SOMETHING - ▁WHY - THER - ▁HOME - ▁OR - ▁INSIDE - ▁NEW - ▁HEY - TION - ▁EVERYTHING - ▁HAD - ▁SOMETIMES - ▁HARD - ▁TOUCH - ▁HEAR - ▁AM - ▁MUCH - ▁LONG - ▁STAR - GETTING - ▁WALK - ▁PEOPLE - ▁BEFORE - ▁CLOSE - ▁TWO - ▁FAR - ▁SHOW - ▁STAND - ▁LOSE - ▁HELP - ▁NAME - ▁BOY - ▁TRUE - ▁PLAY - ▁DARK - ▁THINGS - ▁NA - ▁TEAR - ▁END - ▁NOBODY - ▁SEA - ▁ROCKABYE - ▁BELIEVE - ▁BROKE - ▁AROUND - ▁START - ▁KISS - ▁FEELING - ▁BREAK - ▁SOMEONE - ▁FRIEND - ▁ALONE - ▁BEAUTIFUL - ▁CRAZY - ▁OWN - OSE - ▁STOP - ▁LOST - ▁HIM - ▁BAD - ▁CHANCE - ▁REALLY - ▁WISH - ▁MOVE - ▁SKY - ▁PLACE - AKE - ▁LEAVE - ▁YA - ▁STRONG - ▁PUT - ▁OPEN - ▁WRONG - ▁COLD - OCK - ▁USED - ▁FOUND - ▁LONELY - ▁DANCE - EACH - ▁ANOTHER - ▁SIDE - ▁UNDER - ▁MATTER - ▁THESE - ▁CARE - ▁MINE - ▁SHINE - ▁AFRAID - ▁TURN - ▁PLEASE - ▁SUN - ▁DIAMOND - ▁UNTIL - ▁FACE - ▁LEARN - ▁TRUST - ▁WONDER - ▁BREATH - ATE - ▁SORRY - ▁HU - ▁WATCH - ▁LATE - ROUND - ▁ARMS - ▁PERFECT - ▁MAYBE - ▁PULL - ▁REMEMBER - ▁FIGHT - ▁MYSELF - ▁INTO - ▁DARLING - ▁THUNDER - ▁FOLLOW - ▁REASON - ▁BURN - ▁HIS - ▁MUST - ▁FREE - ▁FLASHLIGHT - ▁1 - ▁ENOUGH - ▁DRINK - ▁WORDS - ▁HIDE - ▁UN - ▁FORGET - ▁SURE - ▁CHANGE - ▁SMILE - ▁PROMISE - ▁FOREVER - '2' - ▁SWEET - ▁SAME - ▁OOOH - ▁PART - ▁SOMEBODY - NESS - ▁BRIGHT - ▁HEAVEN - ▁DEEP - ▁HIGH - ▁INSTEAD - ▁MOMENT - ▁ALONG - ▁ALRIGHT - ▁SLOW - ▁TOMORROW - ▁SOUL - ▁QU - ▁PUSH - ▁CHANDELIER - ▁LEFT - SIDE - ▁TOLD - ▁KNEW - READY - ▁LOVING - ▁SAW - '3' - ▁WORK - 
▁DANCING - ▁THREE - ▁SAVE - ▁SHOOT - ▁LEAD - ▁SKI - ▁WILD - ▁WIND - ▁WHILE - ▁EDGE - ▁HAPPY - ▁FEAR - STUCK - ▁MOST - ▁LISTEN - ▁WOAH - ▁FIRST - ▁JOLENE - ▁VOICE - ▁COMP - ▁MILLION - FUL - ▁OOOOOH - ▁CAME - ▁RISE - ▁NEXT - ▁COUNT - ▁MOUNTAIN - ▁ROOM - ▁BLUE - ▁HIT - ▁RAISE - J - ▁THOUSAND - ▁SHAP - ▁TREAT - ▁DRY - ▁FINALLY - ▁TITANIUM - ▁CARRY - ▁TRUTH - ▁WATER - ▁MORNING - TIME - ▁BELONG - ▁UMA - ▁ALIVE - ▁ELSE - ▁ANGEL - ▁BRAND - ▁APART - ▁EVERYBODY - ▁SOUND - ▁GUESS - ▁PRAY - ▁FAITH - ▁AFTER - ▁THROW - ▁TRIED - ▁SLEEP - ▁FOOL - ▁DISCOVERING - ▁FUCK - ▁TASTE - ▁UNDERSTAND - ▁SHAME - ▁POWER - ▁WELCOME - ▁FELT - ▁SAFE - ▁DESERVE - ▁GAME - ▁SUPERMA - ▁SWEAR - ▁BETWEEN - ▁GLASS - ▁CATCH - ▁TOGETHER - '0' - '4' - '6' - '5' - '1' - '8' - '7' - '9' - Q - <sos/eos> init: xavier_uniform input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_bpe500_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
98aaedae74981b85006155c801e64812
yunsizhang/distilbert-base-uncased-finetuned-emotion
yunsizhang
distilbert
12
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
1,337
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2292
- Accuracy: 0.926
- F1: 0.9259

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8732        | 1.0   | 250  | 0.3363          | 0.903    | 0.9002 |
| 0.2645        | 2.0   | 500  | 0.2292          | 0.926    | 0.9259 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
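A minimal usage sketch, assuming the checkpoint loads with the standard text-classification pipeline (the example sentence is illustrative):

```python
# Sketch: predict an emotion label for one sentence.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="yunsizhang/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # [{'label': ..., 'score': ...}]
```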
3b31bc8fc262f2a96364176f934c83f7
KoichiYasuoka/roberta-base-thai-spm-ud-head
KoichiYasuoka
roberta
20
15
transformers
0
question-answering
true
false
false
apache-2.0
['th']
['universal_dependencies']
null
0
0
0
0
0
0
0
['thai', 'question-answering', 'dependency-parsing']
false
true
true
3,604
false
# roberta-base-thai-spm-ud-head

## Model Description

This is a RoBERTa model pretrained on Thai Wikipedia texts for dependency-parsing (head-detection on Universal Dependencies) as question-answering, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.

## How to Use

```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="กว่า",context="หลายหัวดีกว่าหัวเดียว"))
```

or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))

```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="# text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=TransformersUD("KoichiYasuoka/roberta-base-thai-spm-ud-head")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
5b0bc9e69251f6353933b493e2ae8cae
anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-2
anas-awadalla
roberta
17
3
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
985
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-large-few-shot-k-64-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
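A minimal extractive-QA sketch, assuming the checkpoint works with the standard question-answering pipeline (question and context are illustrative):

```python
# Sketch: extract an answer span from a context passage.
from transformers import pipeline

qa = pipeline("question-answering",
              model="anas-awadalla/roberta-large-few-shot-k-64-finetuned-squad-seed-2")
result = qa(question="How many training examples were used?",
            context="The model was fine-tuned on SQuAD with only 64 examples.")
print(result["answer"], result["score"])
```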
4997b8b901375011ecaed5999177b4d2
luoyixin/marian-finetuned-kde4-en-to-zh
luoyixin
marian
17
2
transformers
0
translation
true
false
false
apache-2.0
null
['kde4']
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,075
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# marian-finetuned-kde4-en-to-zh

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9338
- Bleu: 40.6780

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
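A minimal translation sketch, assuming the checkpoint works with the standard translation pipeline (the example sentence is illustrative):

```python
# Sketch: translate an English sentence to Chinese.
from transformers import pipeline

translator = pipeline("translation", model="luoyixin/marian-finetuned-kde4-en-to-zh")
print(translator("Open the file menu and select Save As.")[0]["translation_text"])
```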
33bd65401259343aef4d8e070bf412f5
alxdfy/noggles9000
alxdfy
null
20
2
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image']
false
true
true
1,345
false
### noggles9000 on Stable Diffusion via Dreambooth

trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

#### Model by alxdfy

This is the Stable Diffusion model fine-tuned on the noggles9000 concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **nounfootball.jpg**

You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).

You can run your new concept via A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Sample pictures of this concept:

nounfootball.jpg
![nounfootball.jpg 0](https://huggingface.co/alxdfy/noggles9000/resolve/main/concept_images/nounfootball.jpg)
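A minimal `diffusers` sketch, assuming the repository loads as a standard Stable Diffusion pipeline (the prompt beyond the instance token is illustrative):

```python
# Sketch: generate with the Dreambooth concept via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "alxdfy/noggles9000", torch_dtype=torch.float16).to("cuda")
# The card lists "nounfootball.jpg" as the instance prompt token.
image = pipe("nounfootball.jpg playing in a stadium").images[0]
image.save("noggles9000.png")
```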
d12c0e42e5fe699420d007b1bf61433d
TheRensselaerIDEA/gpt2-large-covid-tweet-response
TheRensselaerIDEA
gpt2
12
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,668
false
Base model: [gpt2-large](https://huggingface.co/gpt2-large)

Fine-tuned to generate responses on a dataset of [COVID-19 public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (3.36 at 2 epochs) seen during training. See Training metrics for Tensorboard logs.

Also see: our [Vaccine public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-vaccine-tweet-response).

**Data input format:** <span style="color:red"><|message|></span>public health message<span style="color:red"><|author|></span>public health Twitter handle<span style="color:red"><|response|></span>

Example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.trainer_utils import set_seed
import torch

tokenizer = AutoTokenizer.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
model = AutoModelForCausalLM.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
set_seed(33)

message = "Is your child worried about #COVID19? Learn the facts so you can answer your children’s questions."
author = "CDCgov"
num_responses = 2

author_token, message_token, response_token = tokenizer.additional_special_tokens
input_str = f"{message_token}{message}{author_token}{author}{response_token}"
inputs = tokenizer(input_str, return_tensors="pt").to(device)

responses_ids = model.generate(**inputs,
                               max_new_tokens=100,
                               pad_token_id=tokenizer.pad_token_id,
                               do_sample=True,
                               top_p=0.95,
                               temperature=1.5,
                               num_beams=3,
                               early_stopping=True,
                               num_return_sequences=num_responses)
responses = [tokenizer.decode(r[inputs.input_ids.shape[-1]:], skip_special_tokens=True)
             for r in responses_ids]

for i, resp in enumerate(responses):
    print(f"Response {i}: {resp}\n")
```

Output:

```
Response 0: @CDCgov I'm not worried. I don't know who needs to hear this, but I have a feeling I know who will be listening. It is not the virus. It is the media. I know you and CDC have been lying for months now, but the media will keep pushing this lie.

Response 1: #WashYourHands to help #StopTheSpread of #COVID19 and other diseases. Learn more about hand washing: #HandWashing
```
995021ffcb3aa3f1f94e6c2c934ce03a
l3cube-pune/marathi-bert
l3cube-pune
bert
8
6
transformers
0
fill-mask
true
false
false
cc-by-4.0
['mr']
['L3Cube-MahaCorpus']
null
0
0
0
0
0
0
0
[]
false
true
true
978
false
## MahaBERT

MahaBERT is a Marathi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. [Dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).

A new version of this model is available here: https://huggingface.co/l3cube-pune/marathi-bert-v2

```
@InProceedings{joshi:2022:WILDRE6,
  author    = {Joshi, Raviraj},
  title     = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
  booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
  month     = {June},
  year      = {2022},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {97--101}
}
```
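A minimal masked-language-modeling sketch, assuming the checkpoint works with the standard fill-mask pipeline (the Marathi example sentence is illustrative):

```python
# Sketch: top masked-token predictions for a Marathi sentence.
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/marathi-bert")
for pred in fill("मी शाळेत [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```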
089414add86b2572d56cb8e8688b5d07
9pinus/macbert-base-chinese-medical-collation
9pinus
bert
14
17
transformers
4
token-classification
true
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
['Token Classification']
false
true
true
1,346
false
## Model description

This model is a fine-tuned version of MacBERT for the purpose of spell checking in medical application scenarios. We fine-tuned the Chinese base version of MacBERT on a 300M dataset including 60K+ authorized medical articles. We randomly corrupted 30% of the sentences in these articles by adding noise in the form of visually or phonologically similar characters. Consequently, the fine-tuned model achieves 96% accuracy on our test dataset.

## Intended uses & limitations

You can use this model directly with a pipeline for token classification:

```python
>>> from transformers import (AutoModelForTokenClassification, AutoTokenizer)
>>> from transformers import pipeline
>>> hub_model_id = "9pinus/macbert-base-chinese-medical-collation"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲肖唑片、环酯红霉素片等药物进行抗感染镇痛。")
>>> for item in result:
>>>     if item['entity'] == 1:
>>>         print(item)

{'entity': 1, 'score': 0.58127016, 'index': 14, 'word': '肖', 'start': 13, 'end': 14}
```

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
24c48a6d5b1496987993935570435a7d
blmnk/distilbert-base-cased-finetuned-news
blmnk
distilbert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
924
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-cased-finetuned-news

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.22.2
- Pytorch 1.12.1+cu116
- Datasets 2.5.2
- Tokenizers 0.12.1
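A minimal usage sketch, assuming the checkpoint loads with the standard text-classification pipeline; label names depend on the (undocumented) fine-tuning dataset, and the headline is illustrative:

```python
# Sketch: score a news headline with the fine-tuned classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="blmnk/distilbert-base-cased-finetuned-news")
print(clf("Stocks rally as central bank holds rates steady."))
```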
7c6cef472aa99e3c170273e7f27ff7d8
enoriega/rule_learning_test
enoriega
bert
22
0
transformers
0
null
true
false
false
apache-2.0
null
['enoriega/odinsynth_dataset']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,673
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# rule_learning_test

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1255

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1764        | 0.32  | 20   | 0.2303          |
| 0.145         | 0.64  | 40   | 0.1470          |
| 0.129         | 0.96  | 60   | 0.1321          |
| 0.1256        | 1.29  | 80   | 0.1265          |
| 0.1304        | 1.61  | 100  | 0.1252          |
| 0.1235        | 1.93  | 120  | 0.1260          |
| 0.125         | 2.26  | 140  | 0.1261          |
| 0.1263        | 2.58  | 160  | 0.1262          |
| 0.1244        | 2.9   | 180  | 0.1256          |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
0fef32cac958a303d9278e923d9cbc7b
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final
akashsivanandan
wav2vec2
12
9
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,115
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-tamil-colab-final

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7539
- Wer: 0.6135

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.1466       | 1.0   | 118  | 4.3444          | 1.0    |
| 3.4188        | 2.0   | 236  | 3.2496          | 1.0    |
| 2.8617        | 3.0   | 354  | 1.6165          | 1.0003 |
| 0.958         | 4.0   | 472  | 0.7984          | 0.8720 |
| 0.5929        | 5.0   | 590  | 0.6733          | 0.7831 |
| 0.4628        | 6.0   | 708  | 0.6536          | 0.7621 |
| 0.3834        | 7.0   | 826  | 0.6037          | 0.7155 |
| 0.3242        | 8.0   | 944  | 0.6376          | 0.7184 |
| 0.2736        | 9.0   | 1062 | 0.6214          | 0.7070 |
| 0.2433        | 10.0  | 1180 | 0.6158          | 0.6944 |
| 0.2217        | 11.0  | 1298 | 0.6548          | 0.6830 |
| 0.1992        | 12.0  | 1416 | 0.6331          | 0.6775 |
| 0.1804        | 13.0  | 1534 | 0.6644          | 0.6874 |
| 0.1639        | 14.0  | 1652 | 0.6629          | 0.6649 |
| 0.143         | 15.0  | 1770 | 0.6927          | 0.6836 |
| 0.1394        | 16.0  | 1888 | 0.6933          | 0.6888 |
| 0.1296        | 17.0  | 2006 | 0.7039          | 0.6860 |
| 0.1212        | 18.0  | 2124 | 0.7042          | 0.6628 |
| 0.1121        | 19.0  | 2242 | 0.7132          | 0.6475 |
| 0.1069        | 20.0  | 2360 | 0.7423          | 0.6438 |
| 0.1063        | 21.0  | 2478 | 0.7171          | 0.6484 |
| 0.1025        | 22.0  | 2596 | 0.7396          | 0.6451 |
| 0.0946        | 23.0  | 2714 | 0.7400          | 0.6432 |
| 0.0902        | 24.0  | 2832 | 0.7385          | 0.6286 |
| 0.0828        | 25.0  | 2950 | 0.7368          | 0.6286 |
| 0.079         | 26.0  | 3068 | 0.7471          | 0.6306 |
| 0.0747        | 27.0  | 3186 | 0.7524          | 0.6201 |
| 0.0661        | 28.0  | 3304 | 0.7576          | 0.6201 |
| 0.0659        | 29.0  | 3422 | 0.7579          | 0.6130 |
| 0.0661        | 30.0  | 3540 | 0.7539          | 0.6135 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
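A minimal inference sketch that decodes CTC output directly, assuming the checkpoint ships a `Wav2Vec2Processor` (`sample.wav` is a hypothetical mono Tamil recording):

```python
# Sketch: transcribe a 16 kHz Tamil clip with the fine-tuned checkpoint.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```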
2836dca6f88f453a3f646369c4e203ee
tzvc/b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1
tzvc
null
28
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image']
false
true
true
779
false
### training params

```json
{
  "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
  "instance_data_dir": "./b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1/instance_data",
  "class_data_dir": "./class_data/person",
  "output_dir": "./b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1/",
  "train_text_encoder": true,
  "with_prior_preservation": true,
  "prior_loss_weight": 1.0,
  "instance_prompt": "me",
  "class_prompt": "person",
  "resolution": 512,
  "train_batch_size": 1,
  "gradient_accumulation_steps": 1,
  "gradient_checkpointing": true,
  "use_8bit_adam": true,
  "learning_rate": 1e-06,
  "lr_scheduler": "polynomial",
  "lr_warmup_steps": 0,
  "num_class_images": 500,
  "max_train_steps": 1050,
  "mixed_precision": "fp16"
}
```
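A minimal inference sketch for the trained weights, assuming the repository contains an exported `diffusers` pipeline; the prompt simply combines the instance prompt ("me") and class prompt ("person") from the params above:

```python
# Sketch: run the Dreambooth result with its instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/b3d0ef12-11d6-43df-8a96-ebcb5ca71ea1", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of me, person, high detail").images[0]
image.save("out.png")
```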
009063d69a023666beb17fecf510d539
gngpostalsrvc/BERiT_2000_custom_architecture_40_epochs_ls_.2
gngpostalsrvc
roberta
11
2
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
12,064
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERiT_2000_custom_architecture_40_epochs_ls_.2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - label_smoothing_factor: 0.2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 15.998 | 0.19 | 500 | 8.5537 | | 7.8818 | 0.39 | 1000 | 7.3646 | | 7.2781 | 0.58 | 1500 | 7.1307 | | 7.1073 | 0.77 | 2000 | 7.0462 | | 7.0749 | 0.97 | 2500 | 7.0667 | | 7.0373 | 1.16 | 3000 | 6.9511 | | 6.9767 | 1.36 | 3500 | 6.8339 | | 6.9483 | 1.55 | 4000 | 6.7795 | | 6.9071 | 1.74 | 4500 | 6.7828 | | 6.8591 | 1.94 | 5000 | 6.7164 | | 6.8595 | 2.13 | 5500 | 6.7705 | | 6.8406 | 2.32 | 6000 | 6.6906 | | 6.7861 | 2.52 | 6500 | 6.6878 | | 6.8103 | 2.71 | 7000 | 6.6486 | | 6.7724 | 2.9 | 7500 | 6.6703 | | 6.7563 | 3.1 | 8000 | 6.6626 | | 6.7567 | 3.29 | 8500 | 6.6603 | | 6.7315 | 3.49 | 9000 | 6.6392 | | 6.7443 | 3.68 | 9500 | 6.6306 | | 6.7244 | 3.87 | 10000 | 6.6456 | | 6.7464 | 4.07 | 10500 | 6.6224 | | 6.7008 | 4.26 | 11000 | 6.6138 | | 6.7076 | 4.45 | 11500 | 6.6783 | | 6.6944 | 4.65 | 12000 | 6.6147 | | 6.6993 | 4.84 | 12500 | 6.6466 | | 6.6893 | 5.03 | 13000 | 6.6369 | | 6.6905 | 5.23 | 13500 | 6.6293 | | 6.6899 | 5.42 | 14000 | 6.6271 | | 6.6835 | 5.62 | 14500 | 6.6566 | | 6.6746 | 5.81 | 15000 | 6.6385 | | 6.68 | 6.0 | 15500 | 6.6309 | | 6.6776 | 6.2 | 16000 | 6.6069 | | 6.6714 | 6.39 | 16500 | 6.5991 | | 6.6766 | 6.58 | 17000 | 6.6180 | | 6.6591 | 6.78 | 17500 | 6.6212 | | 6.6396 | 6.97 | 18000 | 6.5804 | | 6.6575 | 7.16 | 18500 | 6.6096 | | 6.6506 | 7.36 | 19000 | 6.5579 | | 6.6618 | 7.55 | 19500 | 6.5911 | | 6.6581 | 7.75 | 20000 | 6.5870 | | 6.6703 | 7.94 | 20500 | 6.6062 | | 6.6392 | 8.13 | 21000 | 6.5962 | | 6.6343 | 8.33 | 21500 | 6.5903 | | 6.6426 | 8.52 | 22000 | 6.6010 | | 6.6227 | 8.71 | 22500 | 6.6060 | | 6.6392 | 8.91 | 23000 | 6.5935 | | 6.6198 | 9.1 | 23500 | 6.6293 | | 6.6372 | 9.3 | 24000 | 6.5594 | | 6.6146 | 9.49 | 24500 | 6.5917 | | 6.6119 | 9.68 | 25000 | 6.5694 | | 6.6292 | 9.88 | 25500 | 6.6230 | | 6.634 | 10.07 | 26000 | 6.5857 | | 6.5863 | 10.26 | 26500 | 6.5938 | | 6.5957 | 10.46 | 27000 | 6.6256 | | 6.5928 | 10.65 | 27500 | 6.6111 | | 6.5948 | 10.84 | 28000 | 6.6031 | | 6.6131 | 11.04 | 28500 | 6.5582 | | 6.5946 | 11.23 | 29000 | 6.6093 | | 6.6155 | 11.43 | 29500 | 6.5670 | | 6.6051 | 11.62 | 30000 | 6.6016 | | 6.5917 | 11.81 | 30500 | 6.6045 | | 6.5918 | 12.01 | 31000 | 6.5802 | | 6.558 | 12.2 | 31500 | 6.5195 | | 6.5896 | 12.39 | 32000 | 6.6315 | | 6.5662 | 12.59 | 32500 | 6.6112 | | 6.5702 | 12.78 | 33000 | 6.5779 | | 6.5798 | 12.97 | 33500 | 6.5662 | | 6.5963 | 13.17 | 34000 | 6.5776 | | 6.5733 | 13.36 | 34500 | 6.5870 | | 6.5499 | 13.56 | 35000 | 6.5850 | | 6.5492 | 13.75 | 35500 | 6.5957 | | 6.5466 | 13.94 | 36000 | 6.5812 | | 6.5741 | 14.14 | 
36500 | 6.5287 | | 6.5612 | 14.33 | 37000 | 6.5611 | | 6.5648 | 14.52 | 37500 | 6.5381 | | 6.5661 | 14.72 | 38000 | 6.5742 | | 6.5564 | 14.91 | 38500 | 6.5424 | | 6.5423 | 15.1 | 39000 | 6.5987 | | 6.5471 | 15.3 | 39500 | 6.5662 | | 6.5559 | 15.49 | 40000 | 6.5290 | | 6.5332 | 15.69 | 40500 | 6.5412 | | 6.5362 | 15.88 | 41000 | 6.5486 | | 6.5351 | 16.07 | 41500 | 6.5959 | | 6.5337 | 16.27 | 42000 | 6.5405 | | 6.5246 | 16.46 | 42500 | 6.5217 | | 6.4999 | 16.65 | 43000 | 6.5443 | | 6.5459 | 16.85 | 43500 | 6.5424 | | 6.5077 | 17.04 | 44000 | 6.5499 | | 6.5069 | 17.23 | 44500 | 6.5509 | | 6.5189 | 17.43 | 45000 | 6.5310 | | 6.5086 | 17.62 | 45500 | 6.5361 | | 6.5182 | 17.82 | 46000 | 6.5320 | | 6.51 | 18.01 | 46500 | 6.4850 | | 6.4868 | 18.2 | 47000 | 6.5155 | | 6.4665 | 18.4 | 47500 | 6.5305 | | 6.5123 | 18.59 | 48000 | 6.5301 | | 6.4981 | 18.78 | 48500 | 6.4617 | | 6.4606 | 18.98 | 49000 | 6.4895 | | 6.4716 | 19.17 | 49500 | 6.4790 | | 6.4733 | 19.36 | 50000 | 6.4818 | | 6.4935 | 19.56 | 50500 | 6.4518 | | 6.4761 | 19.75 | 51000 | 6.4852 | | 6.4651 | 19.95 | 51500 | 6.4836 | | 6.4462 | 20.14 | 52000 | 6.4792 | | 6.4605 | 20.33 | 52500 | 6.4661 | | 6.4718 | 20.53 | 53000 | 6.4639 | | 6.459 | 20.72 | 53500 | 6.4683 | | 6.4407 | 20.91 | 54000 | 6.4663 | | 6.4388 | 21.11 | 54500 | 6.4832 | | 6.4479 | 21.3 | 55000 | 6.4606 | | 6.4583 | 21.49 | 55500 | 6.4723 | | 6.4169 | 21.69 | 56000 | 6.4897 | | 6.4437 | 21.88 | 56500 | 6.4368 | | 6.4566 | 22.08 | 57000 | 6.4491 | | 6.4248 | 22.27 | 57500 | 6.4630 | | 6.431 | 22.46 | 58000 | 6.4246 | | 6.4274 | 22.66 | 58500 | 6.4618 | | 6.4262 | 22.85 | 59000 | 6.4177 | | 6.4328 | 23.04 | 59500 | 6.4243 | | 6.4305 | 23.24 | 60000 | 6.4178 | | 6.4078 | 23.43 | 60500 | 6.4310 | | 6.4431 | 23.63 | 61000 | 6.4338 | | 6.4066 | 23.82 | 61500 | 6.4080 | | 6.417 | 24.01 | 62000 | 6.4236 | | 6.4008 | 24.21 | 62500 | 6.3703 | | 6.4222 | 24.4 | 63000 | 6.4188 | | 6.4304 | 24.59 | 63500 | 6.3924 | | 6.4063 | 24.79 | 64000 | 6.4140 | | 6.4176 | 24.98 | 64500 | 6.4419 | | 6.4203 | 25.17 | 65000 | 6.4250 | | 6.3983 | 25.37 | 65500 | 6.3602 | | 6.3911 | 25.56 | 66000 | 6.4129 | | 6.3821 | 25.76 | 66500 | 6.4225 | | 6.3864 | 25.95 | 67000 | 6.3801 | | 6.4109 | 26.14 | 67500 | 6.4032 | | 6.4136 | 26.34 | 68000 | 6.3870 | | 6.3714 | 26.53 | 68500 | 6.4385 | | 6.3711 | 26.72 | 69000 | 6.4081 | | 6.391 | 26.92 | 69500 | 6.3901 | | 6.3931 | 27.11 | 70000 | 6.4047 | | 6.3842 | 27.3 | 70500 | 6.3830 | | 6.3798 | 27.5 | 71000 | 6.3935 | | 6.3903 | 27.69 | 71500 | 6.3756 | | 6.3771 | 27.89 | 72000 | 6.3554 | | 6.3763 | 28.08 | 72500 | 6.3911 | | 6.3576 | 28.27 | 73000 | 6.4059 | | 6.3581 | 28.47 | 73500 | 6.3976 | | 6.3739 | 28.66 | 74000 | 6.3921 | | 6.363 | 28.85 | 74500 | 6.3590 | | 6.3687 | 29.05 | 75000 | 6.3683 | | 6.3788 | 29.24 | 75500 | 6.3915 | | 6.3505 | 29.43 | 76000 | 6.3826 | | 6.3618 | 29.63 | 76500 | 6.3833 | | 6.3287 | 29.82 | 77000 | 6.4055 | | 6.3589 | 30.02 | 77500 | 6.3994 | | 6.3614 | 30.21 | 78000 | 6.3848 | | 6.3729 | 30.4 | 78500 | 6.3550 | | 6.3687 | 30.6 | 79000 | 6.3683 | | 6.3377 | 30.79 | 79500 | 6.3743 | | 6.3188 | 30.98 | 80000 | 6.3113 | | 6.3613 | 31.18 | 80500 | 6.3852 | | 6.3428 | 31.37 | 81000 | 6.3610 | | 6.3541 | 31.56 | 81500 | 6.3848 | | 6.3821 | 31.76 | 82000 | 6.3706 | | 6.3357 | 31.95 | 82500 | 6.3191 | | 6.3408 | 32.15 | 83000 | 6.3357 | | 6.3301 | 32.34 | 83500 | 6.3374 | | 6.3681 | 32.53 | 84000 | 6.3583 | | 6.324 | 32.73 | 84500 | 6.3472 | | 6.3615 | 32.92 | 85000 | 6.3359 | | 6.3382 | 33.11 | 85500 | 6.3664 | | 6.34 | 33.31 | 86000 | 
6.3281 | | 6.3504 | 33.5 | 86500 | 6.3688 | | 6.3393 | 33.69 | 87000 | 6.3553 | | 6.3453 | 33.89 | 87500 | 6.3493 | | 6.3293 | 34.08 | 88000 | 6.3315 | | 6.3346 | 34.28 | 88500 | 6.3134 | | 6.3325 | 34.47 | 89000 | 6.3631 | | 6.3497 | 34.66 | 89500 | 6.3380 | | 6.332 | 34.86 | 90000 | 6.3484 | | 6.3224 | 35.05 | 90500 | 6.3602 | | 6.3242 | 35.24 | 91000 | 6.3414 | | 6.3346 | 35.44 | 91500 | 6.3151 | | 6.3547 | 35.63 | 92000 | 6.3499 | | 6.3243 | 35.82 | 92500 | 6.3173 | | 6.3148 | 36.02 | 93000 | 6.3141 | | 6.3202 | 36.21 | 93500 | 6.3358 | | 6.3251 | 36.41 | 94000 | 6.2946 | | 6.3313 | 36.6 | 94500 | 6.3413 | | 6.3077 | 36.79 | 95000 | 6.2959 | | 6.3173 | 36.99 | 95500 | 6.3220 | | 6.3207 | 37.18 | 96000 | 6.3630 | | 6.311 | 37.37 | 96500 | 6.3802 | | 6.3259 | 37.57 | 97000 | 6.3425 | | 6.3269 | 37.76 | 97500 | 6.3407 | | 6.3136 | 37.96 | 98000 | 6.3140 | | 6.3007 | 38.15 | 98500 | 6.3392 | | 6.2911 | 38.34 | 99000 | 6.3874 | | 6.3241 | 38.54 | 99500 | 6.3363 | | 6.3056 | 38.73 | 100000 | 6.3766 | | 6.3138 | 38.92 | 100500 | 6.3147 | | 6.3065 | 39.12 | 101000 | 6.3622 | | 6.3118 | 39.31 | 101500 | 6.3200 | | 6.3009 | 39.5 | 102000 | 6.3316 | | 6.3107 | 39.7 | 102500 | 6.3112 | | 6.2977 | 39.89 | 103000 | 6.3120 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
dfa8a3776bc463f89b0e044b9c33e4e9
Pawaret717/distilbert-base-uncased-finetuned-imdb
Pawaret717
distilbert
9
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4174

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086        | 1.0   | 157  | 2.4898          |
| 2.5796        | 2.0   | 314  | 2.4230          |
| 2.5269        | 3.0   | 471  | 2.4354          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
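A minimal sketch of the domain-adapted masked-language model in use, assuming the standard fill-mask pipeline (the example sentence is illustrative):

```python
# Sketch: top masked-token predictions after IMDB domain adaptation.
from transformers import pipeline

fill = pipeline("fill-mask", model="Pawaret717/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie is a great [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```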
6c25f7657d65d2ec4db11171aaf74463
zuzhe/Chinese-wedding
zuzhe
null
9
0
null
15
null
false
false
false
openrail
null
null
null
1
1
0
0
1
1
0
[]
false
true
true
1,694
false
Chinese-wedding needs a low CFG scale, around 3.5-7. Because the training set only contains head portraits, only the face is reliably stable; forgive me for not doing better. Fusing with another model is suggested. I love Chinese style; thanks to my QQ friends for their long-term help and teaching, and thanks again. Thanks to teacher Screw for the training set.

Note: it is recommended to use "cute face" and "beautiful face" to stabilize the face. Add "long neck" to the negative prompt, and use a VAE with high saturation.

BY 昂扬

![00004-2447141747-8k Wallpaper,grand,(((masterpiece))), (((best quality))), ((ultra-detailed)), (illustration), (detailed light),solo,(doukou),_(w.png](https://s3.amazonaws.com/moonup/production/uploads/1675437913416-635e14681453686fae2cee93.png)
![00010-2823428029-masterpiece, best quality,1girl, solo, earrings, jewelry, flower, black_hair, hair_ornament, hair_flower, long_sleeves, long_hai.png](https://s3.amazonaws.com/moonup/production/uploads/1675437914990-635e14681453686fae2cee93.png)
![00194-3922598814-masterpiece, best quality,beautiful face,a girl,a woman with long hair wearing a red dress and earrings with a red background an.png](https://s3.amazonaws.com/moonup/production/uploads/1675438187452-635e14681453686fae2cee93.png)
![00170-592331038-masterpiece, best quality,beautiful face,a girl,a woman with long hair wearing a red dress and earrings with a red background an.png](https://s3.amazonaws.com/moonup/production/uploads/1675438184736-635e14681453686fae2cee93.png)
![00024-3200162253-doukou,chinese style architecture,Chinese style,lake,ancient town,beautiful and meticulous water,lotus,egret,raining,(Fishing bo.png](https://s3.amazonaws.com/moonup/production/uploads/1675438185934-635e14681453686fae2cee93.png)
59d53c1ab4855b43f19f4795c9616c6b
Helsinki-NLP/opus-tatoeba-fr-it
Helsinki-NLP
marian
12
68
transformers
0
translation
true
true
false
apache-2.0
['fr', 'it']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,152
false
### fr-it

* source group: French
* target group: Italian
* OPUS readme: [fra-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807-2021-11-11.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip)
* test set translations: [opusTCv20210807-2021-11-11.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt)
* test set scores: [opusTCv20210807-2021-11-11.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.eval.txt)

## Benchmarks

| testset | BLEU | chr-F | #sent | #words | BP |
|---------|------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.fra-ita | 54.8 | 0.737 | 10000 | 61517 | 0.953 |

### System Info:
- hf_name: fr-it
- source_languages: fra
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'it']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt
- src_alpha3: fra
- tgt_alpha3: ita
- chrF2_score: 0.737
- bleu: 54.8
- src_name: French
- tgt_name: Italian
- train_date: 2021-11-11 00:00:00
- src_alpha2: fr
- tgt_alpha2: it
- prefer_old: False
- short_pair: fr-it
- helsinki_git_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-11-19:40
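A minimal translation sketch using the Marian classes, assuming the checkpoint loads as a standard MarianMT model (the example sentence is illustrative):

```python
# Sketch: French-to-Italian translation with the Tatoeba checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-tatoeba-fr-it"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```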
b74933fd86915e0a2e1796d9fbb3ebf1
P0intMaN/PyAutoCode
P0intMaN
gpt2
11
2
transformers
0
text-generation
true
true
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,357
false
# PyAutoCode: GPT-2 based Python auto-code.

PyAutoCode is a cut-down Python autosuggestion tool built on the **GPT-2** *(motivation: GPyT)* model. This baby model *(trained only up to 3 epochs)* is not **"fine-tuned"** yet; therefore, I highly recommend not using it in a production environment or incorporating PyAutoCode into any of your projects. It has been trained on **112GB** of Python data sourced from the best crowdsource platform ever -- **GitHub**.

*NOTE: Increased training and fine-tuning would be highly appreciated, and I firmly believe it would improve the ability of PyAutoCode significantly.*

## Some Model Features

- Built on *GPT-2*
- Tokenized with *ByteLevelBPETokenizer*
- Data sourced from *GitHub (almost 5 consecutive days of latest Python repositories)*
- Makes use of *GPT2LMHeadModel* and *DataCollatorForLanguageModeling* for training
- Newline characters are custom coded as `<N>`

## Get a Glimpse of the Model

You can make use of the **Inference API** of huggingface *(present on the right sidebar)* to load the model and check the result. Just enter any code snippet as input. Something like:

```sh
for i in range(
```

## Usage

You can use my model too! Here's a quick tour of how you can achieve this:

Install transformers

```sh
$ pip install transformers
```

Call the API and get it to work!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("P0intMaN/PyAutoCode")
model = AutoModelForCausalLM.from_pretrained("P0intMaN/PyAutoCode")

# input: single line or multi-line. Highly recommended to use doc-strings.
inp = """import pandas"""
format_inp = inp.replace('\n', "<N>")
tokenize_inp = tokenizer.encode(format_inp, return_tensors='pt')
result = model.generate(tokenize_inp)
decode_result = tokenizer.decode(result[0])
format_result = decode_result.replace('<N>', "\n")

# printing the result
print(format_result)
```

Upon successful execution, the above should probably produce *(your results may vary when this model is fine-tuned)*

```sh
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```

## Credits

##### *Developed as a part of a university project by [Pratheek U](https://www.github.com/P0intMaN) and [Sourav Singh](https://github.com/Sourav11902312lpu)*
2911b94f1d08af66b0241fed41a23adc
Taoseef/XLM-roberta-finetuned
Taoseef
xlm-roberta
8
1
transformers
0
text-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
818
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# XLM-roberta-finetuned

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.21.1
- TensorFlow 2.8.2
- Tokenizers 0.12.1
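A minimal TensorFlow loading sketch, assuming the repository holds a sequence-classification head saved from Keras; label meanings are unknown since the card doesn't document the dataset, and the input sentence is illustrative:

```python
# Sketch: load the Keras-trained checkpoint and get class probabilities.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Taoseef/XLM-roberta-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```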
af1bc529dcfa8ddb14aaaa6e344c9f97
eduardopds/distilbert-base-uncased-tweets
eduardopds
distilbert
8
3
transformers
1
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,698
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# eduardopds/distilbert-base-uncased-tweets

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7428
- Validation Loss: 0.9322
- Epoch: 9

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 310, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0162     | 1.0010          | 0     |
| 0.9552     | 0.9574          | 1     |
| 0.8928     | 0.9393          | 2     |
| 0.8238     | 0.9412          | 3     |
| 0.7581     | 0.9322          | 4     |
| 0.7268     | 0.9322          | 5     |
| 0.7310     | 0.9322          | 6     |
| 0.7390     | 0.9322          | 7     |
| 0.7423     | 0.9322          | 8     |
| 0.7428     | 0.9322          | 9     |

### Framework versions

- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
7993d5d143477bf67b3be85111ad9b76
cindy203cc/finetuning-sentiment-model-3000-samples
cindy203cc
distilbert
13
11
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3187
- Accuracy: 0.8633
- F1: 0.8629

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
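A minimal sentiment-scoring sketch, assuming the checkpoint works with the standard text-classification pipeline (the review text is illustrative):

```python
# Sketch: IMDB-style sentiment scoring with the fine-tuned checkpoint.
from transformers import pipeline

sentiment = pipeline("text-classification",
                     model="cindy203cc/finetuning-sentiment-model-3000-samples")
print(sentiment("A clever, beautifully shot film with a weak ending."))
```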
a317c6aa34aac5da3a0235ffbb5c894a
muhtasham/small-mlm-glue-cola
muhtasham
bert
12
0
transformers
1
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,442
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# small-mlm-glue-cola

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0589        | 0.47  | 500  | 2.8255          |
| 2.8708        | 0.94  | 1000 | 2.8047          |
| 2.7086        | 1.4   | 1500 | 2.6590          |
| 2.6021        | 1.87  | 2000 | 2.7510          |
| 2.4549        | 2.34  | 2500 | 2.8776          |
| 2.4864        | 2.81  | 3000 | nan             |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
967233f3738f8de61c15c4d433afd470
MayaGalvez/bert-base-multilingual-cased-finetuned-pos
MayaGalvez
bert
10
36
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,945
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-multilingual-cased-finetuned-pos

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1736
- Precision: 0.9499
- Recall: 0.9504
- F1: 0.9501
- Accuracy: 0.9551

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7663        | 0.27  | 200  | 0.2047          | 0.9318    | 0.9312 | 0.9315 | 0.9388   |
| 0.5539        | 0.53  | 400  | 0.1815          | 0.9381    | 0.9404 | 0.9392 | 0.9460   |
| 0.5222        | 0.8   | 600  | 0.1787          | 0.9400    | 0.9424 | 0.9412 | 0.9468   |
| 0.5084        | 1.07  | 800  | 0.1591          | 0.9470    | 0.9463 | 0.9467 | 0.9519   |
| 0.4703        | 1.33  | 1000 | 0.1622          | 0.9456    | 0.9458 | 0.9457 | 0.9510   |
| 0.5005        | 1.6   | 1200 | 0.1666          | 0.9470    | 0.9464 | 0.9467 | 0.9519   |
| 0.4677        | 1.87  | 1400 | 0.1583          | 0.9483    | 0.9483 | 0.9483 | 0.9532   |
| 0.4704        | 2.13  | 1600 | 0.1635          | 0.9472    | 0.9475 | 0.9473 | 0.9528   |
| 0.4639        | 2.4   | 1800 | 0.1569          | 0.9475    | 0.9488 | 0.9482 | 0.9536   |
| 0.4627        | 2.67  | 2000 | 0.1605          | 0.9474    | 0.9478 | 0.9476 | 0.9527   |
| 0.4608        | 2.93  | 2200 | 0.1535          | 0.9485    | 0.9495 | 0.9490 | 0.9538   |
| 0.4306        | 3.2   | 2400 | 0.1646          | 0.9489    | 0.9487 | 0.9488 | 0.9536   |
| 0.4583        | 3.47  | 2600 | 0.1642          | 0.9488    | 0.9495 | 0.9491 | 0.9539   |
| 0.453         | 3.73  | 2800 | 0.1646          | 0.9498    | 0.9505 | 0.9501 | 0.9554   |
| 0.4347        | 4.0   | 3000 | 0.1629          | 0.9494    | 0.9504 | 0.9499 | 0.9552   |
| 0.4425        | 4.27  | 3200 | 0.1738          | 0.9495    | 0.9502 | 0.9498 | 0.9550   |
| 0.4335        | 4.53  | 3400 | 0.1733          | 0.9499    | 0.9506 | 0.9503 | 0.9550   |
| 0.4306        | 4.8   | 3600 | 0.1736          | 0.9499    | 0.9504 | 0.9501 | 0.9551   |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
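A minimal tagging sketch, assuming the checkpoint works with the standard token-classification pipeline (the example sentence is illustrative):

```python
# Sketch: multilingual POS tagging with the fine-tuned checkpoint.
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="MayaGalvez/bert-base-multilingual-cased-finetuned-pos",
                  aggregation_strategy="simple")
for tok in tagger("The quick brown fox jumps over the lazy dog."):
    print(tok["word"], tok["entity_group"])
```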
c3ecb6c5a3fa8847dbeac87553441455
WillHeld/roberta-base-mnli
WillHeld
roberta
15
92
transformers
0
text-classification
true
false
false
mit
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
32,521
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-mnli

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3617
- Accuracy: 0.8657

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 1.0993 | 0.02 | 500 | 1.0983 | 0.3321 |
| 1.099 | 0.04 | 1000 | 1.0932 | 0.4276 |
| 1.011 | 0.06 | 1500 | 0.8352 | 0.6732 |
| 0.7551 | 0.08 | 2000 | 0.6018 | 0.7615 |
| 0.6343 | 0.1 | 2500 | 0.5726 | 0.7813 |
| 0.5884 | 0.12 | 3000 | 0.5349 | 0.7926 |
| 0.5548 | 0.14 | 3500 | 0.4925 | 0.8078 |
| 0.5244 | 0.16 | 4000 | 0.4806 | 0.8161 |
| 0.5198 | 0.18 | 4500 | 0.4614 | 0.8257 |
| 0.5168 | 0.2 | 5000 | 0.4713 | 0.8177 |
| 0.5194 | 0.22 | 5500 | 0.4344 | 0.8323 |
| 0.485 | 0.24 | 6000 | 0.4527 | 0.8316 |
| 0.4909 | 0.26 | 6500 | 0.4377 | 0.8376 |
| 0.49 | 0.29 | 7000 | 0.4649 | 0.8266 |
| 0.4897 | 0.31 | 7500 | 0.4162 | 0.8413 |
| 0.4672 | 0.33 | 8000 | 0.4163 | 0.8425 |
| 0.4699 | 0.35 | 8500 | 0.4060 | 0.8451 |
| 0.4729 | 0.37 | 9000 | 0.4412 | 0.8387 |
| 0.4733 | 0.39 | 9500 | 0.4353 | 0.8401 |
| 0.4699 | 0.41 | 10000 | 0.4060 | 0.8476 |
| 0.4759 | 0.43 | 10500 | 0.4226 | 0.8358 |
| 0.461 | 0.45 | 11000 | 0.4220 | 0.8423 |
| 0.4608 | 0.47 | 11500 | 0.4404 | 0.8319 |
| 0.462 | 0.49 | 12000 | 0.4280 | 0.8455 |
| 0.4533 | 0.51 | 12500 | 0.4128 | 0.8468 |
| 0.4691 | 0.53 | 13000 | 0.4155 | 0.8437 |
| 0.4552 | 0.55 | 13500 | 0.4385 | 0.8348 |
| 0.4573 | 0.57 | 14000 | 0.4498 | 0.8424 |
| 0.4562 | 0.59 | 14500 | 0.4162 | 0.8442 |
| 0.4665 | 0.61 | 15000 | 0.4417 | 0.8432 |
| 0.4569 | 0.63 | 15500 | 0.4113 | 0.8492 |
| 0.4705 | 0.65 | 16000 | 0.4454 | 0.8399 |
| 0.4685 | 0.67 | 16500 | 0.4055 | 0.8451 |
| 0.4475 | 0.69 | 17000 | 0.4426 | 0.8383 |
| 0.4641 | 0.71 | 17500 | 0.4256 | 0.8471 |
| 0.4299 | 0.73 | 18000 | 0.4260 | 0.8478 |
| 0.4439 | 0.75 | 18500 | 0.4218 | 0.8454 |
| 0.4628 | 0.77 | 19000 | 0.4087 | 0.8479 |
| 0.4502 | 0.79 | 19500 | 0.4238 | 0.8450 |
| 0.4299 | 0.81 | 20000 | 0.4091 | 0.8485 |
| 0.4496 | 0.84 | 20500 | 0.4160 | 0.8439 |
| 0.4492 | 0.86 | 21000 | 0.4109 | 0.8469 |
| 0.432 | 0.88 | 21500 | 0.4499 | 0.8493 |
| 0.4343 | 0.9 | 22000 | 0.4136 | 0.8465 |
| 0.4445 | 0.92 | 22500 | 0.4095 | 0.8433 |
| 0.4378 | 0.94 | 23000 | 0.3999 | 0.8483 |
| 0.4367 | 0.96 | 23500 | 0.3962 | 0.8509 |
| 0.4428 | 0.98 | 24000 | 0.3958 | 0.8504 |
| 0.4356 | 1.0 | 24500 | 0.3998 | 0.8558 |
| 0.3715 | 1.02 | 25000 | 0.4016 | 0.8589 |
| 0.3649 | 1.04 | 25500 | 0.4368 | 0.8582 |
| 0.3565 | 1.06 | 26000 | 0.4084 | 0.8519 |
| 0.3626 | 1.08 | 26500 | 0.4302 | 0.8438 |
| 0.3535 | 1.1 | 27000 | 0.4206 | 0.8557 |
| 0.3684 | 1.12 | 27500 | 0.4117 | 0.8561 |
| 0.3649 | 1.14 | 28000 | 0.4300 | 0.8527 |
| 0.3791 | 1.16 | 28500 | 0.3916 | 0.8585 |
| 0.366 | 1.18 | 29000 | 0.4101 | 0.8592 |
| 0.3777 | 1.2 | 29500 | 0.3946 | 0.8561 |
| 0.3672 | 1.22 | 30000 | 0.4417 | 0.8530 |
| 0.3688 | 1.24 | 30500 | 0.4066 | 0.8523 |
| 0.3525 | 1.26 | 31000 | 0.4299 | 0.8581 |
| 0.3688 | 1.28 | 31500 | 0.3870 | 0.8553 |
| 0.3699 | 1.3 | 32000 | 0.3781 | 0.8627 |
| 0.3547 | 1.32 | 32500 | 0.4311 | 0.8526 |
| 0.3653 | 1.34 | 33000 | 0.4034 | 0.8603 |
| 0.3738 | 1.36 | 33500 | 0.4103 | 0.8554 |
| 0.3824 | 1.39 | 34000 | 0.3719 | 0.8618 |
| 0.3591 | 1.41 | 34500 | 0.4244 | 0.8615 |
| 0.3697 | 1.43 | 35000 | 0.4689 | 0.8451 |
| 0.3598 | 1.45 | 35500 | 0.4149 | 0.8532 |
| 0.3586 | 1.47 | 36000 | 0.4070 | 0.8591 |
| 0.3519 | 1.49 | 36500 | 0.4133 | 0.8545 |
| 0.3681 | 1.51 | 37000 | 0.3889 | 0.8601 |
| 0.3611 | 1.53 | 37500 | 0.3934 | 0.8591 |
| 0.3696 | 1.55 | 38000 | 0.4313 | 0.8552 |
| 0.3798 | 1.57 | 38500 | 0.3784 | 0.8602 |
| 0.3601 | 1.59 | 39000 | 0.3994 | 0.8600 |
| 0.3696 | 1.61 | 39500 | 0.4206 | 0.8577 |
| 0.368 | 1.63 | 40000 | 0.3903 | 0.8627 |
| 0.3473 | 1.65 | 40500 | 0.3813 | 0.8655 |
| 0.3604 | 1.67 | 41000 | 0.3930 | 0.8551 |
| 0.3741 | 1.69 | 41500 | 0.3644 | 0.8618 |
| 0.3551 | 1.71 | 42000 | 0.3936 | 0.8583 |
| 0.378 | 1.73 | 42500 | 0.3826 | 0.8607 |
| 0.3609 | 1.75 | 43000 | 0.3815 | 0.8618 |
| 0.3678 | 1.77 | 43500 | 0.3961 | 0.8578 |
| 0.3633 | 1.79 | 44000 | 0.4011 | 0.8603 |
| 0.3792 | 1.81 | 44500 | 0.4061 | 0.8592 |
| 0.3675 | 1.83 | 45000 | 0.4155 | 0.8631 |
| 0.3576 | 1.85 | 45500 | 0.4061 | 0.8589 |
| 0.3546 | 1.87 | 46000 | 0.3862 | 0.8623 |
| 0.3564 | 1.89 | 46500 | 0.3937 | 0.8607 |
| 0.3602 | 1.91 | 47000 | 0.3851 | 0.8646 |
| 0.3494 | 1.94 | 47500 | 0.4015 | 0.8541 |
| 0.3499 | 1.96 | 48000 | 0.4266 | 0.8545 |
| 0.3672 | 1.98 | 48500 | 0.3761 | 0.8588 |
| 0.3661 | 2.0 | 49000 | 0.4121 | 0.8567 |
| 0.2759 | 2.02 | 49500 | 0.4653 | 0.8645 |
| 0.2927 | 2.04 | 50000 | 0.4652 | 0.8597 |
| 0.2736 | 2.06 | 50500 | 0.4547 | 0.8597 |
| 0.2749 | 2.08 | 51000 | 0.4896 | 0.8565 |
| 0.2757 | 2.1 | 51500 | 0.4814 | 0.8639 |
| 0.2833 | 2.12 | 52000 | 0.4110 | 0.8656 |
| 0.2797 | 2.14 | 52500 | 0.4316 | 0.8636 |
| 0.2643 | 2.16 | 53000 | 0.4317 | 0.8599 |
| 0.2791 | 2.18 | 53500 | 0.4557 | 0.8617 |
| 0.2737 | 2.2 | 54000 | 0.4102 | 0.8624 |
| 0.2748 | 2.22 | 54500 | 0.4187 | 0.8585 |
| 0.2619 | 2.24 | 55000 | 0.4412 | 0.8590 |
| 0.2718 | 2.26 | 55500 | 0.4707 | 0.8618 |
| 0.2662 | 2.28 | 56000 | 0.4754 | 0.8594 |
| 0.282 | 2.3 | 56500 | 0.4376 | 0.8617 |
| 0.284 | 2.32 | 57000 | 0.4393 | 0.8599 |
| 0.2733 | 2.34 | 57500 | 0.4531 | 0.8581 |
| 0.2878 | 2.36 | 58000 | 0.4727 | 0.8549 |
| 0.2812 | 2.38 | 58500 | 0.4221 | 0.8625 |
| 0.2657 | 2.4 | 59000 | 0.4456 | 0.8583 |
| 0.2716 | 2.42 | 59500 | 0.4455 | 0.8668 |
| 0.2766 | 2.44 | 60000 | 0.4940 | 0.8580 |
| 0.2871 | 2.46 | 60500 | 0.4460 | 0.8501 |
| 0.2731 | 2.49 | 61000 | 0.4600 | 0.8631 |
| 0.2885 | 2.51 | 61500 | 0.4229 | 0.8645 |
| 0.2764 | 2.53 | 62000 | 0.4107 | 0.8638 |
| 0.2866 | 2.55 | 62500 | 0.4250 | 0.8638 |
| 0.2754 | 2.57 | 63000 | 0.4846 | 0.8580 |
| 0.3028 | 2.59 | 63500 | 0.4339 | 0.8627 |
| 0.2828 | 2.61 | 64000 | 0.4697 | 0.8613 |
| 0.2875 | 2.63 | 64500 | 0.4167 | 0.8638 |
| 0.2836 | 2.65 | 65000 | 0.5050 | 0.8600 |
| 0.2978 | 2.67 | 65500 | 0.4139 | 0.8628 |
| 0.2946 | 2.69 | 66000 | 0.4449 | 0.8644 |
| 0.2822 | 2.71 | 66500 | 0.4302 | 0.8612 |
| 0.3006 | 2.73 | 67000 | 0.4256 | 0.8631 |
| 0.2896 | 2.75 | 67500 | 0.4993 | 0.8603 |
| 0.2787 | 2.77 | 68000 | 0.4467 | 0.8636 |
| 0.3 | 2.79 | 68500 | 0.4196 | 0.8592 |
| 0.2939 | 2.81 | 69000 | 0.4234 | 0.8614 |
| 0.2841 | 2.83 | 69500 | 0.4173 | 0.8660 |
| 0.2935 | 2.85 | 70000 | 0.4054 | 0.8658 |
| 0.2977 | 2.87 | 70500 | 0.4400 | 0.8623 |
| 0.2853 | 2.89 | 71000 | 0.4322 | 0.8668 |
| 0.2779 | 2.91 | 71500 | 0.4460 | 0.8595 |
| 0.2923 | 2.93 | 72000 | 0.4279 | 0.8619 |
| 0.2915 | 2.95 | 72500 | 0.4324 | 0.8625 |
| 0.2927 | 2.97 | 73000 | 0.4108 | 0.8672 |
| 0.29 | 2.99 | 73500 | 0.4299 | 0.8579 |
| 0.2255 | 3.01 | 74000 | 0.5337 | 0.8637 |
| 0.2113 | 3.04 | 74500 | 0.5046 | 0.8624 |
| 0.207 | 3.06 | 75000 | 0.6011 | 0.8551 |
| 0.2226 | 3.08 | 75500 | 0.5426 | 0.8579 |
| 0.2129 | 3.1 | 76000 | 0.5036 | 0.8640 |
| 0.2201 | 3.12 | 76500 | 0.5629 | 0.8604 |
| 0.2185 | 3.14 | 77000 | 0.5416 | 0.8607 |
| 0.21 | 3.16 | 77500 | 0.5457 | 0.8605 |
| 0.2372 | 3.18 | 78000 | 0.5337 | 0.8594 |
| 0.2237 | 3.2 | 78500 | 0.5060 | 0.8679 |
| 0.2277 | 3.22 | 79000 | 0.5647 | 0.8651 |
| 0.2301 | 3.24 | 79500 | 0.4906 | 0.8602 |
| 0.2238 | 3.26 | 80000 | 0.5231 | 0.8647 |
| 0.2365 | 3.28 | 80500 | 0.5628 | 0.8621 |
| 0.2189 | 3.3 | 81000 | 0.5496 | 0.8630 |
| 0.2233 | 3.32 | 81500 | 0.5418 | 0.8639 |
| 0.2216 | 3.34 | 82000 | 0.5032 | 0.8689 |
| 0.2314 | 3.36 | 82500 | 0.5437 | 0.8634 |
| 0.2351 | 3.38 | 83000 | 0.4863 | 0.8653 |
| 0.2378 | 3.4 | 83500 | 0.5158 | 0.8635 |
| 0.2357 | 3.42 | 84000 | 0.5142 | 0.8629 |
| 0.2484 | 3.44 | 84500 | 0.4536 | 0.8657 |
| 0.2261 | 3.46 | 85000 | 0.5619 | 0.8649 |
| 0.2323 | 3.48 | 85500 | 0.5371 | 0.8587 |
| 0.2336 | 3.5 | 86000 | 0.5562 | 0.8621 |
| 0.2259 | 3.52 | 86500 | 0.5339 | 0.8589 |
| 0.2371 | 3.54 | 87000 | 0.4711 | 0.8665 |
| 0.227 | 3.57 | 87500 | 0.5350 | 0.8644 |
| 0.2417 | 3.59 | 88000 | 0.4692 | 0.8665 |
| 0.2176 | 3.61 | 88500 | 0.5195 | 0.8655 |
| 0.2393 | 3.63 | 89000 | 0.5468 | 0.8588 |
| 0.2219 | 3.65 | 89500 | 0.5498 | 0.8646 |
| 0.23 | 3.67 | 90000 | 0.5367 | 0.8703 |
| 0.2317 | 3.69 | 90500 | 0.4761 | 0.8639 |
| 0.2241 | 3.71 | 91000 | 0.4992 | 0.8654 |
| 0.2327 | 3.73 | 91500 | 0.5040 | 0.8678 |
| 0.2312 | 3.75 | 92000 | 0.4943 | 0.8639 |
| 0.2369 | 3.77 | 92500 | 0.4824 | 0.8721 |
| 0.2235 | 3.79 | 93000 | 0.5090 | 0.8661 |
| 0.2256 | 3.81 | 93500 | 0.5258 | 0.8644 |
| 0.236 | 3.83 | 94000 | 0.5490 | 0.8542 |
| 0.2313 | 3.85 | 94500 | 0.4672 | 0.8677 |
| 0.228 | 3.87 | 95000 | 0.5037 | 0.8623 |
| 0.2297 | 3.89 | 95500 | 0.5207 | 0.8545 |
| 0.2332 | 3.91 | 96000 | 0.5139 | 0.8698 |
| 0.2331 | 3.93 | 96500 | 0.5182 | 0.8615 |
| 0.2354 | 3.95 | 97000 | 0.5090 | 0.8657 |
| 0.2273 | 3.97 | 97500 | 0.5523 | 0.8637 |
| 0.2433 | 3.99 | 98000 | 0.5148 | 0.8691 |
| 0.191 | 4.01 | 98500 | 0.6007 | 0.8654 |
| 0.1683 | 4.03 | 99000 | 0.6770 | 0.8636 |
| 0.1778 | 4.05 | 99500 | 0.6595 | 0.8635 |
| 0.1832 | 4.07 | 100000 | 0.6129 | 0.8608 |
| 0.1842 | 4.09 | 100500 | 0.6612 | 0.8611 |
| 0.1865 | 4.12 | 101000 | 0.6551 | 0.8658 |
| 0.1833 | 4.14 | 101500 | 0.6294 | 0.8643 |
| 0.1869 | 4.16 | 102000 | 0.6234 | 0.8614 |
| 0.1806 | 4.18 | 102500 | 0.6417 | 0.8655 |
| 0.1911 | 4.2 | 103000 | 0.6426 | 0.8607 |
| 0.1981 | 4.22 | 103500 | 0.6247 | 0.8589 |
| 0.1731 | 4.24 | 104000 | 0.6613 | 0.8626 |
| 0.1977 | 4.26 | 104500 | 0.5441 | 0.8661 |
| 0.1771 | 4.28 | 105000 | 0.6608 | 0.8644 |
| 0.1903 | 4.3 | 105500 | 0.6174 | 0.8603 |
| 0.1797 | 4.32 | 106000 | 0.6609 | 0.8607 |
| 0.188 | 4.34 | 106500 | 0.6059 | 0.8643 |
| 0.1863 | 4.36 | 107000 | 0.5723 | 0.8663 |
| 0.19 | 4.38 | 107500 | 0.5959 | 0.8652 |
| 0.1869 | 4.4 | 108000 | 0.5898 | 0.8698 |
| 0.1909 | 4.42 | 108500 | 0.6052 | 0.8659 |
| 0.1908 | 4.44 | 109000 | 0.5854 | 0.8690 |
| 0.203 | 4.46 | 109500 | 0.5727 | 0.8694 |
| 0.1993 | 4.48 | 110000 | 0.5877 | 0.8653 |
| 0.1796 | 4.5 | 110500 | 0.6231 | 0.8679 |
| 0.1837 | 4.52 | 111000 | 0.5749 | 0.8694 |
| 0.1885 | 4.54 | 111500 | 0.6174 | 0.8618 |
| 0.1902 | 4.56 | 112000 | 0.5625 | 0.8682 |
| 0.2031 | 4.58 | 112500 | 0.6252 | 0.8577 |
| 0.1986 | 4.6 | 113000 | 0.6147 | 0.8548 |
| 0.1769 | 4.62 | 113500 | 0.6351 | 0.8648 |
| 0.1974 | 4.64 | 114000 | 0.6396 | 0.8630 |
| 0.1952 | 4.67 | 114500 | 0.6174 | 0.8661 |
| 0.1904 | 4.69 | 115000 | 0.6188 | 0.8663 |
| 0.191 | 4.71 | 115500 | 0.5860 | 0.8646 |
| 0.1869 | 4.73 | 116000 | 0.5978 | 0.8586 |
| 0.2056 | 4.75 | 116500 | 0.5985 | 0.8648 |
| 0.1837 | 4.77 | 117000 | 0.5742 | 0.8636 |
| 0.2038 | 4.79 | 117500 | 0.5726 | 0.8662 |
| 0.1939 | 4.81 | 118000 | 0.6097 | 0.8623 |
| 0.1869 | 4.83 | 118500 | 0.5820 | 0.8651 |
| 0.1897 | 4.85 | 119000 | 0.5766 | 0.8666 |
| 0.1792 | 4.87 | 119500 | 0.6093 | 0.8683 |
| 0.2056 | 4.89 | 120000 | 0.5890 | 0.8633 |
| 0.1989 | 4.91 | 120500 | 0.5825 | 0.8674 |
| 0.1916 | 4.93 | 121000 | 0.6250 | 0.8641 |
| 0.197 | 4.95 | 121500 | 0.5848 | 0.8645 |
| 0.1923 | 4.97 | 122000 | 0.5666 | 0.8667 |
| 0.1916 | 4.99 | 122500 | 0.6189 | 0.8638 |
| 0.1642 | 5.01 | 123000 | 0.7094 | 0.8610 |
| 0.1357 | 5.03 | 123500 | 0.6972 | 0.8658 |
| 0.1476 | 5.05 | 124000 | 0.6965 | 0.8664 |
| 0.1476 | 5.07 | 124500 | 0.7177 | 0.8638 |
| 0.1486 | 5.09 | 125000 | 0.6945 | 0.8620 |
| 0.1309 | 5.11 | 125500 | 0.7326 | 0.8626 |
| 0.1575 | 5.13 | 126000 | 0.6473 | 0.8632 |
| 0.1411 | 5.15 | 126500 | 0.6955 | 0.8651 |
| 0.1473 | 5.17 | 127000 | 0.6926 | 0.8648 |
| 0.153 | 5.19 | 127500 | 0.7010 | 0.8638 |
| 0.1488 | 5.22 | 128000 | 0.6643 | 0.8689 |
| 0.144 | 5.24 | 128500 | 0.6868 | 0.8668 |
| 0.156 | 5.26 | 129000 | 0.6682 | 0.8645 |
| 0.1537 | 5.28 | 129500 | 0.6740 | 0.8610 |
| 0.1424 | 5.3 | 130000 | 0.7509 | 0.8603 |
| 0.1531 | 5.32 | 130500 | 0.6966 | 0.8670 |
| 0.1457 | 5.34 | 131000 | 0.7227 | 0.8632 |
| 0.1494 | 5.36 | 131500 | 0.6911 | 0.8626 |
| 0.1476 | 5.38 | 132000 | 0.6903 | 0.8630 |
| 0.1531 | 5.4 | 132500 | 0.6839 | 0.8675 |
| 0.1613 | 5.42 | 133000 | 0.6559 | 0.8601 |
| 0.1456 | 5.44 | 133500 | 0.7161 | 0.8619 |
| 0.1539 | 5.46 | 134000 | 0.7108 | 0.8638 |
| 0.1685 | 5.48 | 134500 | 0.6703 | 0.8628 |
| 0.1482 | 5.5 | 135000 | 0.6692 | 0.8651 |
| 0.1587 | 5.52 | 135500 | 0.6936 | 0.8658 |
| 0.152 | 5.54 | 136000 | 0.6844 | 0.8661 |
| 0.1619 | 5.56 | 136500 | 0.6632 | 0.8641 |
| 0.154 | 5.58 | 137000 | 0.6451 | 0.8666 |
| 0.1525 | 5.6 | 137500 | 0.6529 | 0.8686 |
| 0.1545 | 5.62 | 138000 | 0.6860 | 0.8603 |
| 0.1487 | 5.64 | 138500 | 0.6842 | 0.8668 |
| 0.1546 | 5.66 | 139000 | 0.6692 | 0.8655 |
| 0.168 | 5.68 | 139500 | 0.6701 | 0.8649 |
| 0.1513 | 5.7 | 140000 | 0.6613 | 0.8680 |
| 0.1704 | 5.72 | 140500 | 0.6804 | 0.8643 |
| 0.1517 | 5.74 | 141000 | 0.6871 | 0.8684 |
| 0.1572 | 5.77 | 141500 | 0.6676 | 0.8670 |
| 0.1551 | 5.79 | 142000 | 0.6919 | 0.8638 |
| 0.1483 | 5.81 | 142500 | 0.6801 | 0.8667 |
| 0.1562 | 5.83 | 143000 | 0.6791 | 0.8628 |
| 0.1594 | 5.85 | 143500 | 0.6422 | 0.8671 |
| 0.1627 | 5.87 | 144000 | 0.6526 | 0.8679 |
| 0.1514 | 5.89 | 144500 | 0.6734 | 0.8698 |
| 0.1546 | 5.91 | 145000 | 0.6377 | 0.8711 |
| 0.146 | 5.93 | 145500 | 0.7214 | 0.8657 |
| 0.1608 | 5.95 | 146000 | 0.6756 | 0.8674 |
| 0.1648 | 5.97 | 146500 | 0.6387 | 0.8687 |
| 0.1547 | 5.99 | 147000 | 0.6871 | 0.8646 |
| 0.1304 | 6.01 | 147500 | 0.7543 | 0.8633 |
| 0.1059 | 6.03 | 148000 | 0.7576 | 0.8638 |
| 0.1089 | 6.05 | 148500 | 0.7530 | 0.8642 |
| 0.112 | 6.07 | 149000 | 0.7951 | 0.8640 |
| 0.1198 | 6.09 | 149500 | 0.7381 | 0.8636 |
| 0.1222 | 6.11 | 150000 | 0.7560 | 0.8623 |
| 0.1024 | 6.13 | 150500 | 0.7965 | 0.8669 |
| 0.125 | 6.15 | 151000 | 0.7613 | 0.8620 |
| 0.1005 | 6.17 | 151500 | 0.7851 | 0.8651 |
| 0.1196 | 6.19 | 152000 | 0.7637 | 0.8652 |
| 0.1133 | 6.21 | 152500 | 0.7810 | 0.8660 |
| 0.1271 | 6.23 | 153000 | 0.7510 | 0.8672 |
| 0.1167 | 6.25 | 153500 | 0.7670 | 0.8638 |
| 0.1198 | 6.27 | 154000 | 0.7770 | 0.8632 |
| 0.1194 | 6.29 | 154500 | 0.7720 | 0.8607 |
| 0.1215 | 6.32 | 155000 | 0.7880 | 0.8609 |
| 0.1134 | 6.34 | 155500 | 0.8026 | 0.8617 |
| 0.1113 | 6.36 | 156000 | 0.7632 | 0.8652 |
| 0.1207 | 6.38 | 156500 | 0.7369 | 0.8686 |
| 0.1188 | 6.4 | 157000 | 0.7466 | 0.8657 |
| 0.1283 | 6.42 | 157500 | 0.7531 | 0.8645 |
| 0.1186 | 6.44 | 158000 | 0.7529 | 0.8673 |
| 0.135 | 6.46 | 158500 | 0.7706 | 0.8589 |
| 0.1116 | 6.48 | 159000 | 0.7754 | 0.8646 |
| 0.1295 | 6.5 | 159500 | 0.7026 | 0.8693 |
| 0.1309 | 6.52 | 160000 | 0.7342 | 0.8656 |
| 0.1172 | 6.54 | 160500 | 0.7828 | 0.8644 |
| 0.125 | 6.56 | 161000 | 0.7456 | 0.8671 |
| 0.1199 | 6.58 | 161500 | 0.7464 | 0.8701 |
| 0.1197 | 6.6 | 162000 | 0.7626 | 0.8639 |
| 0.1126 | 6.62 | 162500 | 0.8115 | 0.8609 |
| 0.1365 | 6.64 | 163000 | 0.7407 | 0.8681 |
| 0.122 | 6.66 | 163500 | 0.7648 | 0.8641 |
| 0.1157 | 6.68 | 164000 | 0.7636 | 0.8669 |
| 0.118 | 6.7 | 164500 | 0.7688 | 0.8686 |
| 0.1173 | 6.72 | 165000 | 0.8051 | 0.8687 |
| 0.1137 | 6.74 | 165500 | 0.8101 | 0.8635 |
| 0.1412 | 6.76 | 166000 | 0.7004 | 0.8689 |
| 0.1131 | 6.78 | 166500 | 0.7589 | 0.8664 |
| 0.1232 | 6.8 | 167000 | 0.7657 | 0.8654 |
| 0.1343 | 6.82 | 167500 | 0.7547 | 0.8652 |
| 0.1208 | 6.84 | 168000 | 0.7407 | 0.8699 |
| 0.1284 | 6.87 | 168500 | 0.7182 | 0.8677 |
| 0.1182 | 6.89 | 169000 | 0.7248 | 0.8681 |
| 0.1166 | 6.91 | 169500 | 0.7385 | 0.8678 |
| 0.1289 | 6.93 | 170000 | 0.7293 | 0.8672 |
| 0.1243 | 6.95 | 170500 | 0.7178 | 0.8696 |
| 0.1256 | 6.97 | 171000 | 0.7291 | 0.8633 |
| 0.1162 | 6.99 | 171500 | 0.7515 | 0.8648 |
| 0.1013 | 7.01 | 172000 | 0.7824 | 0.8655 |
| 0.0811 | 7.03 | 172500 | 0.8297 | 0.8647 |
| 0.0831 | 7.05 | 173000 | 0.8144 | 0.8678 |
| 0.0872 | 7.07 | 173500 | 0.8176 | 0.8679 |
| 0.0868 | 7.09 | 174000 | 0.8405 | 0.8642 |
| 0.0756 | 7.11 | 174500 | 0.8867 | 0.8642 |
| 0.0882 | 7.13 | 175000 | 0.8185 | 0.8659 |
| 0.0879 | 7.15 | 175500 | 0.8653 | 0.8625 |
| 0.0831 | 7.17 | 176000 | 0.8323 | 0.8655 |
| 0.0847 | 7.19 | 176500 | 0.8358 | 0.8650 |
| 0.0938 | 7.21 | 177000 | 0.7967 | 0.8665 |
| 0.0908 | 7.23 | 177500 | 0.8147 | 0.8640 |
| 0.0809 | 7.25 | 178000 | 0.8325 | 0.8679 |
| 0.0993 | 7.27 | 178500 | 0.8131 | 0.8655 |
| 0.087 | 7.29 | 179000 | 0.8249 | 0.8628 |
| 0.0873 | 7.31 | 179500 | 0.8326 | 0.8661 |
| 0.0889 | 7.33 | 180000 | 0.8171 | 0.8685 |
| 0.0739 | 7.35 | 180500 | 0.8686 | 0.8642 |
| 0.0821 | 7.37 | 181000 | 0.8739 | 0.8669 |
| 0.0981 | 7.39 | 181500 | 0.8558 | 0.8639 |
| 0.0858 | 7.42 | 182000 | 0.8276 | 0.8673 |
| 0.083 | 7.44 | 182500 | 0.8148 | 0.8675 |
| 0.0969 | 7.46 | 183000 | 0.8520 | 0.8630 |
| 0.0851 | 7.48 | 183500 | 0.8604 | 0.8671 |
| 0.0881 | 7.5 | 184000 | 0.8665 | 0.8634 |
| 0.1036 | 7.52 | 184500 | 0.8233 | 0.8642 |
| 0.0874 | 7.54 | 185000 | 0.8293 | 0.8660 |
| 0.0935 | 7.56 | 185500 | 0.8006 | 0.8671 |
| 0.0887 | 7.58 | 186000 | 0.8352 | 0.8637 |
| 0.0897 | 7.6 | 186500 | 0.8309 | 0.8655 |
| 0.0788 | 7.62 | 187000 | 0.8505 | 0.8653 |
| 0.0887 | 7.64 | 187500 | 0.8465 | 0.8657 |
| 0.0909 | 7.66 | 188000 | 0.8582 | 0.8637 |
| 0.0895 | 7.68 | 188500 | 0.8487 | 0.8659 |
| 0.0729 | 7.7 | 189000 | 0.8770 | 0.8636 |
| 0.0758 | 7.72 | 189500 | 0.8717 | 0.8653 |
| 0.0901 | 7.74 | 190000 | 0.8513 | 0.8639 |
| 0.0848 | 7.76 | 190500 | 0.8554 | 0.8661 |
| 0.0985 | 7.78 | 191000 | 0.8259 | 0.8640 |
| 0.091 | 7.8 | 191500 | 0.8483 | 0.8644 |
| 0.0868 | 7.82 | 192000 | 0.8776 | 0.8602 |
| 0.0898 | 7.84 | 192500 | 0.8470 | 0.8634 |
| 0.0959 | 7.86 | 193000 | 0.8344 | 0.8645 |
| 0.0939 | 7.88 | 193500 | 0.8419 | 0.8641 |
| 0.0769 | 7.9 | 194000 | 0.8355 | 0.8673 |
| 0.0808 | 7.92 | 194500 | 0.8642 | 0.8646 |
| 0.0797 | 7.94 | 195000 | 0.8401 | 0.8663 |
| 0.0875 | 7.97 | 195500 | 0.8598 | 0.8638 |
| 0.0896 | 7.99 | 196000 | 0.8624 | 0.8648 |
| 0.0762 | 8.01 | 196500 | 0.8645 | 0.8656 |
| 0.0552 | 8.03 | 197000 | 0.8844 | 0.8661 |
| 0.0598 | 8.05 | 197500 | 0.8870 | 0.8663 |
| 0.0528 | 8.07 | 198000 | 0.8866 | 0.8679 |
| 0.0679 | 8.09 | 198500 | 0.8835 | 0.8657 |
| 0.0628 | 8.11 | 199000 | 0.9017 | 0.8635 |
| 0.0644 | 8.13 | 199500 | 0.8979 | 0.8647 |
| 0.0446 | 8.15 | 200000 | 0.9144 | 0.8656 |
| 0.0524 | 8.17 | 200500 | 0.9116 | 0.8651 |
| 0.0561 | 8.19 | 201000 | 0.9281 | 0.8639 |
| 0.0525 | 8.21 | 201500 | 0.9115 | 0.8672 |
| 0.0646 | 8.23 | 202000 | 0.8933 | 0.8663 |
| 0.0691 | 8.25 | 202500 | 0.8591 | 0.8662 |
| 0.0708 | 8.27 | 203000 | 0.8525 | 0.8683 |
| 0.0598 | 8.29 | 203500 | 0.8663 | 0.8689 |
| 0.0513 | 8.31 | 204000 | 0.8671 | 0.8704 |
| 0.0564 | 8.33 | 204500 | 0.8597 | 0.8694 |
| 0.0619 | 8.35 | 205000 | 0.8645 | 0.8683 |
| 0.0563 | 8.37 | 205500 | 0.8848 | 0.8658 |
| 0.0615 | 8.39 | 206000 | 0.8728 | 0.8663 |
| 0.0668 | 8.41 | 206500 | 0.8925 | 0.8657 |
| 0.0592 | 8.43 | 207000 | 0.8644 | 0.8673 |
| 0.0668 | 8.45 | 207500 | 0.8601 | 0.8700 |
| 0.071 | 8.47 | 208000 | 0.8735 | 0.8682 |
| 0.061 | 8.49 | 208500 | 0.8797 | 0.8662 |
| 0.0627 | 8.52 | 209000 | 0.8742 | 0.8663 |
| 0.0505 | 8.54 | 209500 | 0.9063 | 0.8649 |
| 0.0607 | 8.56 | 210000 | 0.8940 | 0.8677 |
| 0.0569 | 8.58 | 210500 | 0.8953 | 0.8673 |
| 0.0671 | 8.6 | 211000 | 0.8784 | 0.8667 |
| 0.0509 | 8.62 | 211500 | 0.8942 | 0.8678 |
| 0.0526 | 8.64 | 212000 | 0.8968 | 0.8686 |
| 0.0541 | 8.66 | 212500 | 0.8950 | 0.8694 |
| 0.0677 | 8.68 | 213000 | 0.8808 | 0.8665 |
| 0.0552 | 8.7 | 213500 | 0.8923 | 0.8662 |
| 0.053 | 8.72 | 214000 | 0.9118 | 0.8673 |
| 0.0608 | 8.74 | 214500 | 0.9023 | 0.8700 |
| 0.0573 | 8.76 | 215000 | 0.9096 | 0.8681 |
| 0.0621 | 8.78 | 215500 | 0.8872 | 0.8684 |
| 0.0559 | 8.8 | 216000 | 0.8837 | 0.8672 |
| 0.0593 | 8.82 | 216500 | 0.8937 | 0.8675 |
| 0.0633 | 8.84 | 217000 | 0.8746 | 0.8685 |
| 0.0548 | 8.86 | 217500 | 0.9049 | 0.8662 |
| 0.0427 | 8.88 | 218000 | 0.9195 | 0.8685 |
| 0.0623 | 8.9 | 218500 | 0.9146 | 0.8669 |
| 0.0594 | 8.92 | 219000 | 0.9096 | 0.8672 |
| 0.0683 | 8.94 | 219500 | 0.8778 | 0.8679 |
| 0.0659 | 8.96 | 220000 | 0.8552 | 0.8699 |
| 0.0603 | 8.98 | 220500 | 0.8901 | 0.8679 |
| 0.0566 | 9.0 | 221000 | 0.8997 | 0.8677 |
| 0.0443 | 9.02 | 221500 | 0.9009 | 0.8683 |
| 0.0358 | 9.04 | 222000 | 0.9193 | 0.8680 |
| 0.0317 | 9.07 | 222500 | 0.9319 | 0.8687 |
| 0.0384 | 9.09 | 223000 | 0.9155 | 0.8699 |
| 0.0432 | 9.11 | 223500 | 0.9243 | 0.8685 |
| 0.0408 | 9.13 | 224000 | 0.9251 | 0.8693 |
| 0.0443 | 9.15 | 224500 | 0.9322 | 0.8677 |
| 0.0438 | 9.17 | 225000 | 0.9371 | 0.8666 |
| 0.0379 | 9.19 | 225500 | 0.9283 | 0.8693 |
| 0.0411 | 9.21 | 226000 | 0.9147 | 0.8703 |
| 0.036 | 9.23 | 226500 | 0.9167 | 0.8703 |
| 0.0394 | 9.25 | 227000 | 0.9254 | 0.8688 |
| 0.0363 | 9.27 | 227500 | 0.9288 | 0.8704 |
| 0.0492 | 9.29 | 228000 | 0.9242 | 0.8693 |
| 0.0411 | 9.31 | 228500 | 0.9325 | 0.8677 |
| 0.0408 | 9.33 | 229000 | 0.9370 | 0.8690 |
| 0.0326 | 9.35 | 229500 | 0.9417 | 0.8705 |
| 0.038 | 9.37 | 230000 | 0.9480 | 0.8700 |
| 0.0412 | 9.39 | 230500 | 0.9398 | 0.8693 |
| 0.0588 | 9.41 | 231000 | 0.9174 | 0.8707 |
| 0.0417 | 9.43 | 231500 | 0.9204 | 0.8715 |
| 0.0362 | 9.45 | 232000 | 0.9319 | 0.8701 |
| 0.0283 | 9.47 | 232500 | 0.9562 | 0.8696 |
| 0.0353 | 9.49 | 233000 | 0.9525 | 0.8690 |
| 0.0384 | 9.51 | 233500 | 0.9561 | 0.8687 |
| 0.0406 | 9.53 | 234000 | 0.9375 | 0.8715 |
| 0.0356 | 9.55 | 234500 | 0.9575 | 0.8690 |
| 0.044 | 9.57 | 235000 | 0.9429 | 0.8708 |
| 0.0444 | 9.6 | 235500 | 0.9413 | 0.8690 |
| 0.0421 | 9.62 | 236000 | 0.9412 | 0.8689 |
| 0.038 | 9.64 | 236500 | 0.9352 | 0.8695 |
| 0.0355 | 9.66 | 237000 | 0.9362 | 0.8689 |
| 0.04 | 9.68 | 237500 | 0.9403 | 0.8691 |
| 0.0356 | 9.7 | 238000 | 0.9402 | 0.8706 |
| 0.0383 | 9.72 | 238500 | 0.9466 | 0.8692 |
| 0.0534 | 9.74 | 239000 | 0.9378 | 0.8700 |
| 0.0383 | 9.76 | 239500 | 0.9390 | 0.8697 |
| 0.0418 | 9.78 | 240000 | 0.9404 | 0.8694 |
| 0.0335 | 9.8 | 240500 | 0.9390 | 0.8705 |
| 0.0398 | 9.82 | 241000 | 0.9430 | 0.8696 |
| 0.0336 | 9.84 | 241500 | 0.9438 | 0.8698 |
| 0.045 | 9.86 | 242000 | 0.9414 | 0.8703 |
| 0.0401 | 9.88 | 242500 | 0.9425 | 0.8696 |
| 0.0454 | 9.9 | 243000 | 0.9405 | 0.8696 |
| 0.0361 | 9.92 | 243500 | 0.9394 | 0.8696 |
| 0.0458 | 9.94 | 244000 | 0.9400 | 0.8690 |
| 0.0329 | 9.96 | 244500 | 0.9402 | 0.8693 |
| 0.0469 | 9.98 | 245000 | 0.9401 | 0.8691 |

### Framework versions

- Transformers 4.21.3
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.11.6
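As a minimal inference sketch for an MNLI classifier like this one — the checkpoint path is a placeholder, and the returned label names depend on the checkpoint's `id2label` config:

```python
from transformers import pipeline

# "<this-checkpoint>" is a placeholder for this repository's id
classifier = pipeline("text-classification", model="<this-checkpoint>")

# MNLI scores a premise/hypothesis pair
print(classifier({"text": "A soccer game with multiple males playing.",
                  "text_pair": "Some men are playing a sport."}))
```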
1b10a9056fcc4256d9ecb0b40f2e6a39
tommy19970714/noda-model
tommy19970714
null
56
5
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image']
false
true
true
4,235
false
### noda model

Dreambooth model trained by tommy19970714 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) and the v1-5 base model.

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: noda (use that in your prompt)

![noda 0](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%281%29.jpg)
![noda 1](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%282%29.jpg)
![noda 2](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%283%29.jpg)
![noda 3](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%284%29.jpg)
![noda 4](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%285%29.jpg)
![noda 5](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%286%29.jpg)
![noda 6](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%287%29.jpg)
![noda 7](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%288%29.jpg)
![noda 8](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%289%29.jpg)
![noda 9](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2810%29.jpg)
![noda 10](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2811%29.jpg)
![noda 11](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2812%29.jpg)
![noda 12](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2813%29.jpg)
![noda 13](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2814%29.jpg)
![noda 14](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2815%29.jpg)
![noda 15](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2816%29.jpg)
![noda 16](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2817%29.jpg)
![noda 17](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2818%29.jpg)
![noda 18](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2819%29.jpg)
![noda 19](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2820%29.jpg)
![noda 20](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2821%29.jpg)
![noda 21](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2822%29.jpg)
![noda 22](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2823%29.jpg)
![noda 23](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2824%29.jpg)
![noda 24](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2825%29.jpg)
![noda 25](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2826%29.jpg)
![noda 26](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2827%29.jpg)
![noda 27](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2828%29.jpg)
![noda 28](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2829%29.jpg)
![noda 29](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2830%29.jpg)
![noda 30](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2831%29.jpg)
![noda 31](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2832%29.jpg)
![noda 32](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2833%29.jpg)
![noda 33](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2834%29.jpg)
![noda 34](https://huggingface.co/tommy19970714/noda-model/resolve/main/concept_images/noda_%2835%29.jpg)
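A rough sketch of how a Dreambooth checkpoint like this is typically loaded with `diffusers` (the prompt is illustrative; see the linked Colab notebook for the full workflow):

```python
import torch
from diffusers import StableDiffusionPipeline

# load the fine-tuned Dreambooth weights (fp16 assumed for GPU inference)
pipe = StableDiffusionPipeline.from_pretrained(
    "tommy19970714/noda-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of noda").images[0]  # illustrative concept prompt
image.save("noda.png")
```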
89a40b1db339d31b7ee86d20e56ba93e
rohitp1/wav2vec2-base-timit-finetune
rohitp1
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,619
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-finetune

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 972.3115
- Wer: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 3325.1297 | 1.39 | 100 | 4054.7283 | 1.0 |
| 1624.4673 | 2.77 | 200 | 1100.8928 | 1.0 |
| 1079.3557 | 4.17 | 300 | 1009.5025 | 1.0 |
| 1026.4995 | 5.55 | 400 | 979.0 | 1.0 |
| 1005.6487 | 6.94 | 500 | 964.3292 | 1.0 |
| 1000.4138 | 8.33 | 600 | 972.3115 | 1.0 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.0
- Tokenizers 0.11.0
b7202f5be4c3a7d5bd5a338bada150b1
polixonrio/whisper-small-fy-NL-Transfer-From-English
polixonrio
whisper
23
0
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['fy']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,590
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Western Frisian (Netherlands)

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fy-NL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5703
- Wer: 21.8466

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0078 | 10.01 | 1000 | 0.5184 | 23.0973 |
| 0.0009 | 21.0 | 2000 | 0.5653 | 22.5434 |
| 0.0007 | 31.01 | 3000 | 0.5703 | 21.8466 |
| 0.0004 | 42.0 | 4000 | 0.5968 | 21.9574 |
| 0.0003 | 52.01 | 5000 | 0.6044 | 22.0360 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
969b4e86a8c5a49ec2998af66db27a39
EgilKarlsen/ApacheBertBaseCase
EgilKarlsen
bert
9
10
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,246
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ApacheBertBaseCase

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2008

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.2938 | 1.0 | 20881 | 0.2663 |
| 0.2345 | 2.0 | 41762 | 0.2134 |
| 0.2182 | 3.0 | 62643 | 0.2008 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
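A minimal usage sketch for this masked-language model — the input line is an invented example, not taken from the training data:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="EgilKarlsen/ApacheBertBaseCase")

# illustrative input; the model expects text resembling its fine-tuning corpus
print(fill_mask("The server returned a [MASK] error."))
```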
d3c52edb89754b91acf62c6381267d31
Helsinki-NLP/opus-mt-en-afa
Helsinki-NLP
marian
11
13
transformers
0
translation
true
true
false
apache-2.0
['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,860
false
### eng-afa

* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |

### System Info:

- hf_name: eng-afa
- source_languages: eng
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'eng'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: afa
- short_pair: en-afa
- chrF2_score: 0.375
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 58110.0
- src_name: English
- tgt_name: Afro-Asiatic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: afa
- prefer_old: False
- long_pair: eng-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
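Since a sentence-initial target-language token is required, a minimal translation sketch might look like this (`>>heb<<` selects Hebrew from the target-language list above; the input sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-afa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the >>id<< prefix selects the target language (here: Hebrew)
batch = tokenizer([">>heb<< How are you today?"], return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```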
609c1839872bcb81fabf059e9dbba4e6
DrishtiSharma/whisper-large-v2-vietnamese
DrishtiSharma
whisper
15
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['vi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,317
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Large Vietnamese - Drishti Sharma

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3681
- Wer: 16.6594

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 600
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0667 | 1.73 | 600 | 0.3681 | 16.6594 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
9049cedb55e33ebf5f995db3c3bf97f4
cm-mueller/BACnet-Klassifizierung-Gewerke
cm-mueller
bert
12
2
transformers
0
text-classification
true
false
false
mit
['de']
null
null
0
0
0
0
0
0
0
['generated_from_trainer', 'BACnet']
true
true
true
2,576
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BACnet-Klassifizierung-Gewerke-bert-base-german-cased

This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the [gart-labor](https://huggingface.co/gart-labor) "klassifizierung_gewerke" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0394
- F1: [0.96296296 0.8 0.97297297 1. 0.99469027 0.98979592 0.98969072]

## Model description

This model makes it possible to classify the components of the technical building equipment described with the BACnet standard into different trades. The model is based on a German-language data set.

## Intended uses & limitations

The model divides descriptive texts into the following building services trades: Waste_water_water_gas_systems, Other_systems, Building_automation, Refrigeration_systems, Air_technical_systems, Heavy_current_systems and Heat_supply_systems

## Training and evaluation data

The model is based on a German-language data set.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:---:|:---:|:---:|:---:|:---:|
| 0.4309 | 0.99 | 45 | 0.0736 | [0.89655172 0.84210526 0.97297297 0.98901099 0.9929078 0.99492386 0.98701299] |
| 0.0722 | 1.99 | 90 | 0.0511 | [0.92307692 0.875 0.96 1. 0.99295775 0.98979592 0.98714653] |
| 0.0431 | 2.99 | 135 | 0.0460 | [1. 0.8 0.97297297 1. 0.99469027 0.98979592 0.99224806] |
| 0.0313 | 3.99 | 180 | 0.0365 | [1. 0.84210526 0.97297297 1. 0.99646643 0.98979592 0.99224806] |
| 0.0238 | 4.99 | 225 | 0.0394 | [0.96296296 0.8 0.97297297 1. 0.99469027 0.98979592 0.98969072] |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
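A minimal inference sketch — the German BACnet description is an invented example, and the returned label comes from the trade list above:

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="cm-mueller/BACnet-Klassifizierung-Gewerke")

# illustrative BACnet object description (supply-air fan of a ventilation system)
print(classifier("Zuluftventilator Lüftungsanlage 2.OG"))
```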
25342620b1524b33e8682c9dd009e007
MazenAmria/swin-base-finetuned-cifar100
MazenAmria
swin
12
82
transformers
0
image-classification
true
false
false
apache-2.0
null
['cifar100']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,594
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-base-finetuned-cifar100

This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the cifar100 dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9201
- Loss: 0.3670

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:---:|:---:|:---:|:---:|:---:|
| 0.3536 | 1.0 | 781 | 0.9052 | 0.3141 |
| 0.3254 | 2.0 | 1562 | 0.9117 | 0.2991 |
| 0.0936 | 3.0 | 2343 | 0.9138 | 0.3322 |
| 0.1054 | 4.0 | 3124 | 0.9158 | 0.3483 |
| 0.0269 | 5.0 | 3905 | 0.9201 | 0.3670 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
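A minimal usage sketch, assuming a local image file (the filename is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="MazenAmria/swin-base-finetuned-cifar100")

# accepts a file path, URL, or PIL image; "image.png" is a placeholder
print(classifier("image.png", top_k=5))
```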
3119a653459dcf01de8963f2fae697f7
Hate-speech-CNERG/deoffxlmr-mono-malyalam
Hate-speech-CNERG
xlm-roberta
7
1
transformers
0
text-classification
true
false
false
apache-2.0
['ml']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,470
false
This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-RoBERTa-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.

This model is the best of multiple models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of test predictions got the highest weighted F1 score on the leaderboard (weighted F1 score on the held-out test set: this model - 0.97, ensemble - 0.97).

### For more details about our paper

Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".

***Please cite our paper in any published work that uses any of these resources.***

~~~
@inproceedings{saha-etal-2021-hate,
    title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
    author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
    booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
    month = apr,
    year = "2021",
    address = "Kyiv",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
    pages = "270--276",
    abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~
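A minimal inference sketch — the label names returned depend on this checkpoint's config, and the input string is a placeholder:

```python
from transformers import pipeline

detector = pipeline("text-classification",
                    model="Hate-speech-CNERG/deoffxlmr-mono-malyalam")

# placeholder input: a Malayalam or code-mixed Malayalam comment
print(detector("<Malayalam code-mixed comment>"))
```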
d26b1781d838d35d618d0614994e10ae
DeividasM/whisper-medium-lt
DeividasM
whisper
17
37
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['lt']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,079
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Medium Lithuanian CV11

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 lt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.354951
- Wer: 20.446244

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0056 | 9.42 | 1000 | 0.3252 | 20.5534 |
| 0.0023 | 18.8 | 2000 | 0.3549 | 20.4462 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
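A minimal transcription sketch — the audio path is a placeholder, and the pipeline needs ffmpeg available to decode the file:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DeividasM/whisper-medium-lt")

# "speech.wav" is a placeholder for a Lithuanian audio recording
print(asr("speech.wav")["text"])
```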
30f37fa4070cefd9f59fa620cd9aec43
ilos-vigil/bigbird-small-indonesian-nli
ilos-vigil
big_bird
8
6
transformers
0
zero-shot-classification
true
false
false
mit
['id']
['indonli', 'MoritzLaurer/multilingual-NLI-26lang-2mil7']
null
1
0
1
0
0
0
0
[]
true
true
true
8,029
false
# Indonesian small BigBird model NLI

## Source Code

Source code to create this model and perform the benchmark is available at [https://github.com/ilos-vigil/bigbird-small-indonesian](https://github.com/ilos-vigil/bigbird-small-indonesian).

## Model Description

This model is based on [bigbird-small-indonesian](https://huggingface.co/ilos-vigil/bigbird-small-indonesian) and was finetuned on 2 datasets. It is intended to be used for zero-shot text classification.

## How to use

> Inference for ZSC (Zero Shot Classification) task

```py
>>> from transformers import pipeline
>>> pipe = pipeline(
...     task='zero-shot-classification',
...     model='./tmp/checkpoint-28832'
... )
>>> pipe(
...     sequences='Fakta nomor 7 akan membuat ada terkejut',
...     candidate_labels=['clickbait', 'bukan clickbait'],
...     hypothesis_template='Judul video ini {}.',
...     multi_label=False
... )
{
    'sequence': 'Fakta nomor 7 akan membuat ada terkejut',
    'labels': ['clickbait', 'bukan clickbait'],
    'scores': [0.6102734804153442, 0.38972654938697815]
}
>>> pipe(
...     sequences='Samsung tuntut balik Apple dengan alasan hak paten teknologi.',
...     candidate_labels=['teknologi', 'olahraga', 'bisnis', 'politik', 'kesehatan', 'kuliner'],
...     hypothesis_template='Kategori berita ini adalah {}.',
...     multi_label=True
... )
{
    'sequence': 'Samsung tuntut balik Apple dengan alasan hak paten teknologi.',
    'labels': ['politik', 'teknologi', 'kesehatan', 'bisnis', 'olahraga', 'kuliner'],
    'scores': [0.7390161752700806, 0.6657379269599915, 0.4459509551525116, 0.38407933712005615, 0.3679264783859253, 0.14181996881961823]
}
```

> Inference for NLI (Natural Language Inference) task

```py
>>> pipe = pipeline(
...     task='text-classification',
...     model='./tmp/checkpoint-28832',
...     return_all_scores=True
... )
>>> pipe({
...     'text': 'Nasi adalah makanan pokok.',       # Premise
...     'text_pair': 'Saya mau makan nasi goreng.'  # Hypothesis
... })
[
    {'label': 'entailment', 'score': 0.25495028495788574},
    {'label': 'neutral', 'score': 0.40920916199684143},
    {'label': 'contradiction', 'score': 0.33584052324295044}
]
>>> pipe({
...     'text': 'Python sering digunakan untuk web development dan AI research.',
...     'text_pair': 'AI research biasanya tidak menggunakan bahasa pemrograman Python.'
... })
[
    {'label': 'entailment', 'score': 0.12508109211921692},
    {'label': 'neutral', 'score': 0.22146646678447723},
    {'label': 'contradiction', 'score': 0.653452455997467}
]
```

## Limitation and bias

This model inherits limitations/biases from its parent model and the 2 datasets used for fine-tuning. And just like most language models, this model is sensitive to input changes. Here's an example.

```py
>>> from transformers import pipeline
>>> pipe = pipeline(
...     task='zero-shot-classification',
...     model='./tmp/checkpoint-28832'
... )
>>> text = 'Resep sate ayam enak dan mudah.'
>>> candidate_labels = ['kuliner', 'olahraga']
>>> pipe(
...     sequences=text,
...     candidate_labels=candidate_labels,
...     hypothesis_template='Kategori judul artikel ini adalah {}.',
...     multi_label=False
... )
{
    'sequence': 'Resep sate ayam enak dan mudah.',
    'labels': ['kuliner', 'olahraga'],
    'scores': [0.7711364030838013, 0.22886358201503754]
}
>>> pipe(
...     sequences=text,
...     candidate_labels=candidate_labels,
...     hypothesis_template='Kelas kalimat ini {}.',
...     multi_label=False
... )
{
    'sequence': 'Resep sate ayam enak dan mudah.',
    'labels': ['kuliner', 'olahraga'],
    'scores': [0.7043636441230774, 0.295636385679245]
}
>>> pipe(
...     sequences=text,
...     candidate_labels=candidate_labels,
...     hypothesis_template='{}.',
...     multi_label=False
... )
{
    'sequence': 'Resep sate ayam enak dan mudah.',
    'labels': ['kuliner', 'olahraga'],
    'scores': [0.5986711382865906, 0.4013288915157318]
}
```

## Training, evaluation and testing data

This model was finetuned with [IndoNLI](https://huggingface.co/datasets/indonli) and [multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7). Although the `multilingual-NLI-26lang-2mil7` dataset is machine-translated, it slightly improves the NLI benchmark result and substantially improves the ZSC benchmark result. Both evaluation and testing data are based only on the IndoNLI dataset.

## Training Procedure

The model was finetuned on a single RTX 3060 for 16 epochs/28832 steps with an accumulated batch size of 64. The AdamW optimizer was used with LR 1e-4, weight decay 0.05, learning rate warmup for the first 6% of steps (1730 steps) and linear decay of the learning rate afterwards. Note that while the epoch-9 model weights have the lowest loss/highest accuracy, they perform slightly worse on the ZSC benchmark. Additional information can be seen in the Tensorboard training logs.

## Benchmark as NLI model

Both benchmarks show results for 2 other models as additional comparison. An additional benchmark using the IndoNLI dataset is available in its paper [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://aclanthology.org/2021.emnlp-main.821/).

| Model | bigbird-small-indonesian-nli | xlm-roberta-large-xnli | mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 |
|--------------------------------------------|------------------------------|------------------------|----------------------------------------------|
| Parameter | 30.6M | 559.9M | 278.8M |
| Multilingual | | V | V |
| Finetuned on IndoNLI | V | | V |
| Finetuned on multilingual-NLI-26lang-2mil7 | V | | |
| Test (Lay) | 0.6888 | 0.2226 | 0.8151 |
| Test (Expert) | 0.5734 | 0.3505 | 0.7775 |

## Benchmark as ZSC model

The [Indonesian-Twitter-Emotion-Dataset](https://github.com/meisaputri21/Indonesian-Twitter-Emotion-Dataset/) is used to perform the ZSC benchmark. This benchmark includes 4 different parameter combinations, which affect the performance of each model differently. The hypothesis templates for this benchmark are `Kalimat ini mengekspresikan perasaan {}.` and `{}.`. Note that the F1 score measurement only considers the label with the highest probability.

| Model | Multi-label | Use template | F1 Score |
|----------------------------------------------|-------------|--------------|--------------|
| bigbird-small-indonesian-nli | V | V | 0.3574 |
| | V | | 0.3654 |
| | | V | 0.3985 |
| | | | _0.4160_ |
| xlm-roberta-large-xnli | V | V | _**0.6292**_ |
| | V | | 0.5596 |
| | | V | 0.5737 |
| | | | 0.5433 |
| mDeBERTa-v3-base-xnli-multilingual-nli-2mil7 | V | V | 0.5324 |
| | V | | _0.5499_ |
| | | V | 0.5269 |
| | | | 0.5228 |
f8936cfd5bdf2daa7e52325de50a2971
Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5
Chikashi
t5
11
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['wikihow']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,729
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-wikihow_3epoch_b4_lr3e-5

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4351
- Rouge1: 26.1071
- Rouge2: 9.3627
- Rougel: 22.0825
- Rougelsum: 25.4514
- Gen Len: 18.474

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 2.9216 | 0.13 | 5000 | 2.6385 | 23.8039 | 7.8863 | 20.0109 | 23.0802 | 18.3481 |
| 2.8158 | 0.25 | 10000 | 2.5884 | 24.2567 | 8.2003 | 20.438 | 23.5325 | 18.3833 |
| 2.7743 | 0.38 | 15000 | 2.5623 | 24.8471 | 8.3768 | 20.8711 | 24.1114 | 18.2901 |
| 2.7598 | 0.51 | 20000 | 2.5368 | 25.1566 | 8.6721 | 21.1896 | 24.4558 | 18.3561 |
| 2.7192 | 0.64 | 25000 | 2.5220 | 25.3477 | 8.8106 | 21.3799 | 24.6742 | 18.3108 |
| 2.7207 | 0.76 | 30000 | 2.5114 | 25.5912 | 8.998 | 21.5508 | 24.9344 | 18.3445 |
| 2.7041 | 0.89 | 35000 | 2.4993 | 25.457 | 8.8644 | 21.4516 | 24.7965 | 18.4354 |
| 2.687 | 1.02 | 40000 | 2.4879 | 25.5886 | 8.9766 | 21.6794 | 24.9512 | 18.4035 |
| 2.6652 | 1.14 | 45000 | 2.4848 | 25.7367 | 9.078 | 21.7096 | 25.0924 | 18.4328 |
| 2.6536 | 1.27 | 50000 | 2.4761 | 25.7368 | 9.1609 | 21.729 | 25.0866 | 18.3117 |
| 2.6589 | 1.4 | 55000 | 2.4702 | 25.7738 | 9.1413 | 21.7492 | 25.114 | 18.4862 |
| 2.6384 | 1.53 | 60000 | 2.4620 | 25.7433 | 9.1356 | 21.8198 | 25.0896 | 18.489 |
| 2.6337 | 1.65 | 65000 | 2.4595 | 26.0919 | 9.2605 | 21.9447 | 25.4065 | 18.4083 |
| 2.6375 | 1.78 | 70000 | 2.4557 | 26.0912 | 9.3469 | 22.0182 | 25.4428 | 18.4133 |
| 2.6441 | 1.91 | 75000 | 2.4502 | 26.1366 | 9.3143 | 22.058 | 25.4673 | 18.4972 |
| 2.6276 | 2.03 | 80000 | 2.4478 | 25.9929 | 9.2464 | 21.9271 | 25.3263 | 18.469 |
| 2.6062 | 2.16 | 85000 | 2.4467 | 26.0465 | 9.3166 | 22.0342 | 25.3998 | 18.3777 |
| 2.6126 | 2.29 | 90000 | 2.4407 | 26.1953 | 9.3848 | 22.1148 | 25.5161 | 18.467 |
| 2.6182 | 2.42 | 95000 | 2.4397 | 26.1331 | 9.3626 | 22.1076 | 25.4627 | 18.4413 |
| 2.6041 | 2.54 | 100000 | 2.4375 | 26.1301 | 9.3567 | 22.0869 | 25.465 | 18.4929 |
| 2.5996 | 2.67 | 105000 | 2.4367 | 26.0956 | 9.3314 | 22.063 | 25.4242 | 18.5074 |
| 2.6144 | 2.8 | 110000 | 2.4355 | 26.1764 | 9.4157 | 22.1231 | 25.5175 | 18.4729 |
| 2.608 | 2.93 | 115000 | 2.4351 | 26.1071 | 9.3627 | 22.0825 | 25.4514 | 18.474 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
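A minimal summarization sketch — the article text is a placeholder, and the T5 `summarize:` prefix is assumed to be applied through the pipeline's task defaults:

```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="Chikashi/t5-small-finetuned-wikihow_3epoch_b4_lr3e-5")

article = "<long how-to article text>"  # placeholder input
print(summarizer(article, max_length=60, min_length=10))
```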
ad951a921f0f9d49f3709fa44e98d2d9
anas-awadalla/bart-base-few-shot-k-16-finetuned-squad-seed-4
anas-awadalla
bart
16
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
990
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-base-few-shot-k-16-finetuned-squad-seed-4

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
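A minimal extractive-QA sketch for this SQuAD-finetuned checkpoint — the question and context are invented examples:

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="anas-awadalla/bart-base-few-shot-k-16-finetuned-squad-seed-4")

# illustrative question/context pair
print(qa(question="Where do penguins live?",
         context="Penguins live almost exclusively in the Southern Hemisphere."))
```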
c7060f56810ea70f6ffe6d8973d3dc88
wietsedv/xlm-roberta-base-ft-udpos28-ga
wietsedv
xlm-roberta
8
13
transformers
0
token-classification
true
false
false
apache-2.0
['ga']
['universal_dependencies']
null
0
0
0
0
0
0
0
['part-of-speech', 'token-classification']
true
true
true
565
false
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Irish

This model is part of our paper called:

- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages

Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ga")
```
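A short inference example to complement the loading snippet above, assuming the standard token-classification pipeline (the Irish sentence is illustrative):

```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="wietsedv/xlm-roberta-base-ft-udpos28-ga",
                  aggregation_strategy="simple")

print(tagger("Tá an aimsir go maith inniu."))  # illustrative Irish sentence
```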
d522132c1dd80e559c23257ceace7e1a
understaters/ddpm-butterflies-128
understaters
null
13
3
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,234
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/understaters/ddpm-butterflies-128/tensorboard?#scalars)
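As a starting point for the "How to use" TODO above, a minimal sketch of unconditional sampling with `DDPMPipeline` (assuming the repository stores a standard diffusers pipeline):

```python
from diffusers import DDPMPipeline

# load the trained unconditional pipeline and sample one butterfly image
pipeline = DDPMPipeline.from_pretrained("understaters/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```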
edf43cb52ef3437fe1fd99391a16f373
nandysoham/Wayback_Machine-clustered
nandysoham
distilbert
8
0
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,868
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# nandysoham/Wayback_Machine-clustered

This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3070
- Train End Logits Accuracy: 0.9410
- Train Start Logits Accuracy: 0.8924
- Validation Loss: 0.4163
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.3070 | 0.9410 | 0.8924 | 0.4163 | 0.6667 | 1.0 | 0 |

### Framework versions

- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
cff852777a865b4b6868ec9d05549c78
96harsh56/bert-large-cased-berta-finetuned-subjqa
96harsh56
bert
12
4
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
913
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-berta-finetuned-subjqa This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-06 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
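No usage snippet is provided, so here is a lower-level PyTorch sketch of extractive-QA decoding with this checkpoint; the subjective-QA question and context are invented examples.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Minimal sketch: manual extractive-QA decoding with the fine-tuned checkpoint.
name = "96harsh56/bert-large-cased-berta-finetuned-subjqa"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "How is the battery life?"
context = "The battery life is excellent, lasting two full days on a charge."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```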
7f7e474e51ec68cbf2dbda0aa90d506b
nguyenkhoa2407/favs_filter_classification_v2
nguyenkhoa2407
bert
10
28
transformers
0
text-classification
true
false
false
apache-2.0
null
['filter_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,829
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # favs_filter_classification_v2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the filter_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2016 - F1: 0.9762 - Roc Auc: 0.9844 - Accuracy: 0.9545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.6596 | 1.0 | 16 | 0.6086 | 0.2687 | 0.5474 | 0.0 | | 0.5448 | 2.0 | 32 | 0.5354 | 0.3824 | 0.6063 | 0.0 | | 0.5106 | 3.0 | 48 | 0.4874 | 0.4444 | 0.6382 | 0.0455 | | 0.4353 | 4.0 | 64 | 0.4301 | 0.5352 | 0.6889 | 0.1818 | | 0.3699 | 5.0 | 80 | 0.3890 | 0.6579 | 0.7640 | 0.3636 | | 0.349 | 6.0 | 96 | 0.3663 | 0.6667 | 0.7633 | 0.3182 | | 0.3104 | 7.0 | 112 | 0.3327 | 0.7105 | 0.7953 | 0.4545 | | 0.3023 | 8.0 | 128 | 0.2971 | 0.7733 | 0.8303 | 0.5455 | | 0.2676 | 9.0 | 144 | 0.2766 | 0.8395 | 0.8861 | 0.7727 | | 0.2374 | 10.0 | 160 | 0.2541 | 0.8537 | 0.8980 | 0.7727 | | 0.2238 | 11.0 | 176 | 0.2399 | 0.9024 | 0.9293 | 0.8182 | | 0.2084 | 12.0 | 192 | 0.2221 | 0.9286 | 0.9531 | 0.8636 | | 0.2143 | 13.0 | 208 | 0.2138 | 0.9286 | 0.9531 | 0.8636 | | 0.1846 | 14.0 | 224 | 0.2016 | 0.9762 | 0.9844 | 0.9545 | | 0.1812 | 15.0 | 240 | 0.1957 | 0.9762 | 0.9844 | 0.9545 | | 0.1756 | 16.0 | 256 | 0.1881 | 0.9647 | 0.9806 | 0.9091 | | 0.1662 | 17.0 | 272 | 0.1845 | 0.9762 | 0.9844 | 0.9545 | | 0.1715 | 18.0 | 288 | 0.1802 | 0.9762 | 0.9844 | 0.9545 | | 0.1585 | 19.0 | 304 | 0.1782 | 0.9762 | 0.9844 | 0.9545 | | 0.1595 | 20.0 | 320 | 0.1775 | 0.9762 | 0.9844 | 0.9545 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
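The reported F1/ROC-AUC/accuracy combination suggests a multi-label filter classifier, so the sketch below asks the pipeline for scores on every label rather than just the top one. The multi-label reading and the example query are assumptions; the card does not document the label set.

```python
from transformers import pipeline

# Minimal sketch: score every label, as is typical for a multi-label filter.
classifier = pipeline(
    "text-classification",
    model="nguyenkhoa2407/favs_filter_classification_v2",
    top_k=None,  # all label scores (older transformers: return_all_scores=True)
)

results = classifier(["show me my favorite photos from last summer"])[0]
for label in results:
    print(label["label"], round(label["score"], 3))
```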
fbf22f4e0d5338dc56bc2d01f64fc672
m3hrdadfi/hubert-base-persian-speech-gender-recognition
m3hrdadfi
hubert
7
21,614
transformers
2
null
true
false
false
apache-2.0
['fa']
['shemo']
null
0
0
0
0
0
0
0
['audio', 'speech', 'speech-gender-recognition']
false
true
true
2,578
false
# Gender Recognition in Persian (fa) Speech using HuBERT

## How to use

### Requirements

```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
```

```bash
!git clone https://github.com/m3hrdadfi/soxan.git .
```

### Prediction

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio
from transformers import AutoConfig, Wav2Vec2FeatureExtractor

from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification

import librosa
import IPython.display as ipd
import numpy as np
import pandas as pd
```

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name_or_path = "m3hrdadfi/hubert-base-persian-speech-gender-recognition"
config = AutoConfig.from_pretrained(model_name_or_path)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path)
sampling_rate = feature_extractor.sampling_rate
model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device)
```

```python
def speech_file_to_array_fn(path, sampling_rate):
    speech_array, _sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(_sampling_rate)
    speech = resampler(speech_array).squeeze().numpy()
    return speech


def predict(path, sampling_rate):
    speech = speech_file_to_array_fn(path, sampling_rate)
    inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to(device) for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{"Label": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)]
    return outputs
```

```python
path = "/path/to/female.wav"
outputs = predict(path, sampling_rate)
```

```bash
[{'Label': 'F', 'Score': '98.2%'}, {'Label': 'M', 'Score': '1.8%'}]
```

## Evaluation

The following table summarizes the scores obtained by the model overall and per class.

| Gender | precision | recall | f1-score | accuracy |
|--------|-----------|--------|----------|----------|
| F | 0.98 | 0.97 | 0.98 | |
| M | 0.98 | 0.99 | 0.98 | |
| | | | Overall | 0.98 |

## Questions?

Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
fd0246e88486cbd0288d5c44b8647421
EIStakovskii/german_toxicity_classifier_plus_v2
EIStakovskii
bert
8
104
transformers
0
text-classification
true
false
false
other
['de']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,122
false
## Description

NB: this version of the model is an improved version of [EIStakovskii/german_toxicity_classifier_plus](https://huggingface.co/EIStakovskii/german_toxicity_classifier_plus).

For the training source code and the data, please follow [the github link](https://github.com/eistakovskii/NLP_projects/tree/main/TEXT_CLASSIFICATION).

This model was trained for toxicity labeling. It was fine-tuned from [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased).

To use the model:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model='EIStakovskii/german_toxicity_classifier_plus_v2')

print(classifier("Verpiss dich von hier"))
```

## Metrics (at validation):

epoch|step|eval_accuracy|eval_f1|eval_loss
-|-|-|-|-
0.8|1200|0.9132176234979973|0.9113535629048755|0.24135465919971466

## Comparison against Perspective

This model was compared against Google's [Perspective API](https://developers.perspectiveapi.com/s/?language=en_US), which similarly detects toxicity.

The two models were tested on two datasets: one of [200 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_de_200.csv) and one of [400 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_de_400.csv).

The first (arguably harder) dataset was collected from sentences in the [JigSaw](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data) and [DeTox](https://github.com/hdaSprachtechnologie/detox) datasets. The second (easier) one was collected from a combination of sources: JigSaw and DeTox, as well as [Paradetox](https://github.com/s-nlp/multilingual_detox/tree/main/data) translations and sentences extracted from [Reverso Context](https://context.reverso.net/translation/) by keyword.

# german_toxicity_classifier_plus_v2

size|accuracy|f1
-|-|-
200|0.767|0.787
400|0.9650|0.9651

# Perspective

size|accuracy|f1
-|-|-
200|0.834|0.820
400|0.892|0.885
0d870b3dfc8abfb84df80255ede1f8bd
mp6kv/feedback_intent_test
mp6kv
roberta
12
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,503
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# feedback_intent_test

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.

## Model description

Custom data was generated by labeling text according to these three categories:

- Positive : Encouraging the student that they are correct and on the right track
- Neutral : Mixed feedback, or feedback that asks for more information
- Negative : Informing the student that they need to change direction or that they are not correct

The model takes a text string as input and classifies it into one of the three categories.

## Intended uses & limitations

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/feedback_intent_test")

output = classifier("great job, you're getting it!")
score = output[0]['score']
label = output[0]['label']
```

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
cee8c9bda3674e147344081165baf2b1
theojolliffe/bart-large-cnn-finetuned-roundup-4-4
theojolliffe
bart
15
3
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,769
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-4-4 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7912 - Rouge1: 53.8175 - Rouge2: 35.1335 - Rougel: 38.0823 - Rougelsum: 51.2925 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 398 | 0.9455 | 52.8137 | 33.4924 | 35.5866 | 50.7208 | 142.0 | | 1.1309 | 2.0 | 796 | 0.8397 | 54.0923 | 35.0799 | 37.4609 | 51.5914 | 142.0 | | 0.6902 | 3.0 | 1194 | 0.7932 | 53.5752 | 35.0842 | 37.9295 | 51.0356 | 142.0 | | 0.4951 | 4.0 | 1592 | 0.7912 | 53.8175 | 35.1335 | 38.0823 | 51.2925 | 142.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
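A usage sketch for this summarizer follows; the article text is invented, and `max_length=142` simply mirrors the generation length reported above.

```python
from transformers import pipeline

# Minimal sketch: summarise a roundup-style article with the fine-tuned model.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-finetuned-roundup-4-4",
)

article = (
    "The company reported strong quarterly results, announced a new product "
    "line, and confirmed plans to expand into two additional markets next year."
)
print(summarizer(article, max_length=142, min_length=20)[0]["summary_text"])
```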
b48468d88ffddfcf9e10d7fd725ad845
mrp/marian-finetuned-kde4-en-to-fr
mrp
marian
14
5
transformers
0
translation
true
false
false
apache-2.0
null
['kde4']
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,076
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9643 - Bleu: 50.2041 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
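As a usage sketch (not from the original card), the fine-tuned checkpoint can be called through the translation pipeline; the example sentence is a KDE-style UI string.

```python
from transformers import pipeline

# Minimal sketch: English-to-French translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="mrp/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```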
6d986e4f0577e7714e61e0e9e11ae0d8
jinghan/deberta-base-finetuned-wnli
jinghan
deberta
14
1
transformers
0
text-classification
true
false
false
mit
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,462
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-wnli This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6926 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6926 | 0.5634 | | No log | 2.0 | 80 | 0.6911 | 0.5634 | | No log | 3.0 | 120 | 0.6903 | 0.5634 | | No log | 4.0 | 160 | 0.6905 | 0.5634 | | No log | 5.0 | 200 | 0.6904 | 0.5634 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
079957b4f8bd66ad4bf54276cc3f1301
farofang/t5-small-finetuned-thai-informal-to-formal
farofang
t5
14
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
23,118
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-thai-informal-to-formal This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3091 - Bleu: 20.5964 - Gen Len: 19.9981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 300 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:| | 2.2862 | 1.0 | 1011 | 2.2028 | 31.6678 | 20.0 | | 2.1228 | 2.0 | 2022 | 2.0339 | 32.3643 | 20.0 | | 2.0581 | 3.0 | 3033 | 1.9386 | 32.3784 | 20.0 | | 1.9714 | 4.0 | 4044 | 1.8899 | 31.9728 | 20.0 | | 1.9169 | 5.0 | 5055 | 1.8318 | 32.1064 | 20.0 | | 1.8969 | 6.0 | 6066 | 1.8005 | 31.4324 | 20.0 | | 1.8486 | 7.0 | 7077 | 1.7813 | 31.7758 | 20.0 | | 1.802 | 8.0 | 8088 | 1.7464 | 31.9055 | 20.0 | | 1.7654 | 9.0 | 9099 | 1.7352 | 31.9598 | 20.0 | | 1.7439 | 10.0 | 10110 | 1.7009 | 32.1696 | 20.0 | | 1.7603 | 11.0 | 11121 | 1.6873 | 31.8118 | 20.0 | | 1.7288 | 12.0 | 12132 | 1.6678 | 31.5711 | 20.0 | | 1.7004 | 13.0 | 13143 | 1.6482 | 31.4575 | 20.0 | | 1.6851 | 14.0 | 14154 | 1.6374 | 31.9579 | 20.0 | | 1.6497 | 15.0 | 15165 | 1.6290 | 31.4299 | 20.0 | | 1.656 | 16.0 | 16176 | 1.6130 | 31.2145 | 20.0 | | 1.6423 | 17.0 | 17187 | 1.5931 | 31.365 | 20.0 | | 1.6024 | 18.0 | 18198 | 1.5797 | 31.2247 | 20.0 | | 1.6064 | 19.0 | 19209 | 1.5736 | 31.1535 | 20.0 | | 1.5974 | 20.0 | 20220 | 1.5609 | 31.431 | 20.0 | | 1.5961 | 21.0 | 21231 | 1.5578 | 30.9905 | 20.0 | | 1.5621 | 22.0 | 22242 | 1.5466 | 30.8979 | 20.0 | | 1.5307 | 23.0 | 23253 | 1.5285 | 31.277 | 20.0 | | 1.5359 | 24.0 | 24264 | 1.5370 | 31.4321 | 20.0 | | 1.5558 | 25.0 | 25275 | 1.5215 | 31.2769 | 20.0 | | 1.513 | 26.0 | 26286 | 1.5173 | 30.9782 | 19.9997 | | 1.5241 | 27.0 | 27297 | 1.5105 | 30.6717 | 20.0 | | 1.5133 | 28.0 | 28308 | 1.4973 | 30.3152 | 20.0 | | 1.4713 | 29.0 | 29319 | 1.4927 | 30.276 | 19.9997 | | 1.478 | 30.0 | 30330 | 1.4887 | 30.1004 | 19.9989 | | 1.4572 | 31.0 | 31341 | 1.4845 | 29.8939 | 19.9983 | | 1.4485 | 32.0 | 32352 | 1.4653 | 30.0169 | 19.9986 | | 1.4404 | 33.0 | 33363 | 1.4648 | 28.9061 | 19.9989 | | 1.4408 | 34.0 | 34374 | 1.4586 | 29.598 | 19.9994 | | 1.4296 | 35.0 | 35385 | 1.4585 | 28.9821 | 19.9981 | | 1.408 | 36.0 | 36396 | 1.4517 | 29.6025 | 19.9986 | | 1.4004 | 37.0 | 37407 | 1.4456 | 27.8564 | 19.9992 | | 1.3991 | 38.0 | 38418 | 1.4411 | 28.8947 | 19.9994 | | 1.401 | 39.0 | 39429 | 1.4309 | 27.6809 | 19.9994 | | 1.391 | 40.0 | 40440 | 1.4278 | 29.1687 | 19.9994 | | 1.3709 | 41.0 | 41451 | 1.4217 | 28.2947 | 19.9989 | | 1.3726 | 42.0 | 42462 | 1.4247 | 27.2108 | 19.9983 | | 1.3702 | 43.0 | 43473 | 1.4144 | 25.9973 | 19.9981 | | 1.3636 | 44.0 | 44484 | 1.4163 | 26.0146 | 19.9953 | | 1.3673 | 45.0 | 45495 | 1.4118 | 25.8126 | 19.9978 | | 1.3539 | 46.0 | 46506 | 1.4076 | 25.5185 | 19.9981 | | 1.3434 | 47.0 | 47517 | 1.4023 | 26.2123 | 
19.9947 | | 1.3428 | 48.0 | 48528 | 1.4008 | 25.8932 | 19.9955 | | 1.3325 | 49.0 | 49539 | 1.4003 | 25.7762 | 19.9969 | | 1.3258 | 50.0 | 50550 | 1.3896 | 24.8206 | 19.9961 | | 1.3151 | 51.0 | 51561 | 1.3852 | 24.4683 | 19.9978 | | 1.3035 | 52.0 | 52572 | 1.3843 | 24.9821 | 19.9992 | | 1.2931 | 53.0 | 53583 | 1.3847 | 24.715 | 19.9989 | | 1.2707 | 54.0 | 54594 | 1.3776 | 24.4374 | 19.9986 | | 1.2792 | 55.0 | 55605 | 1.3801 | 23.7683 | 19.9967 | | 1.284 | 56.0 | 56616 | 1.3781 | 23.6961 | 19.9975 | | 1.2664 | 57.0 | 57627 | 1.3680 | 23.6677 | 19.9975 | | 1.2783 | 58.0 | 58638 | 1.3695 | 23.3193 | 19.9986 | | 1.2762 | 59.0 | 59649 | 1.3741 | 22.613 | 19.9972 | | 1.2759 | 60.0 | 60660 | 1.3629 | 23.9067 | 19.9964 | | 1.2618 | 61.0 | 61671 | 1.3687 | 23.7587 | 19.9967 | | 1.2614 | 62.0 | 62682 | 1.3613 | 23.2615 | 19.9975 | | 1.2455 | 63.0 | 63693 | 1.3623 | 23.8722 | 19.9986 | | 1.1977 | 64.0 | 64704 | 1.3528 | 23.1421 | 19.9981 | | 1.2199 | 65.0 | 65715 | 1.3520 | 22.6977 | 19.9975 | | 1.2368 | 66.0 | 66726 | 1.3552 | 23.2495 | 19.9989 | | 1.2087 | 67.0 | 67737 | 1.3404 | 22.6422 | 19.9989 | | 1.214 | 68.0 | 68748 | 1.3499 | 21.979 | 19.9972 | | 1.2322 | 69.0 | 69759 | 1.3453 | 22.1766 | 19.9978 | | 1.2028 | 70.0 | 70770 | 1.3402 | 21.8311 | 19.9975 | | 1.2163 | 71.0 | 71781 | 1.3399 | 22.1417 | 19.9989 | | 1.1769 | 72.0 | 72792 | 1.3446 | 22.253 | 19.9972 | | 1.221 | 73.0 | 73803 | 1.3413 | 22.1546 | 19.9986 | | 1.1768 | 74.0 | 74814 | 1.3335 | 21.8914 | 19.9972 | | 1.1829 | 75.0 | 75825 | 1.3323 | 21.7763 | 19.9947 | | 1.1687 | 76.0 | 76836 | 1.3344 | 21.4495 | 19.9964 | | 1.1873 | 77.0 | 77847 | 1.3337 | 21.7655 | 19.9964 | | 1.1807 | 78.0 | 78858 | 1.3308 | 21.4564 | 19.9967 | | 1.1735 | 79.0 | 79869 | 1.3282 | 21.233 | 19.9967 | | 1.1693 | 80.0 | 80880 | 1.3240 | 21.0794 | 19.9955 | | 1.1714 | 81.0 | 81891 | 1.3262 | 21.1856 | 19.9969 | | 1.154 | 82.0 | 82902 | 1.3282 | 20.5583 | 19.9964 | | 1.1572 | 83.0 | 83913 | 1.3229 | 20.9262 | 19.995 | | 1.1473 | 84.0 | 84924 | 1.3233 | 20.5432 | 19.995 | | 1.1315 | 85.0 | 85935 | 1.3227 | 20.4939 | 19.9942 | | 1.1567 | 86.0 | 86946 | 1.3203 | 21.3354 | 19.9964 | | 1.1485 | 87.0 | 87957 | 1.3211 | 20.9952 | 19.9939 | | 1.1313 | 88.0 | 88968 | 1.3202 | 20.1199 | 19.9961 | | 1.1428 | 89.0 | 89979 | 1.3188 | 20.414 | 19.9925 | | 1.1374 | 90.0 | 90990 | 1.3220 | 20.003 | 19.993 | | 1.1274 | 91.0 | 92001 | 1.3153 | 20.7172 | 19.9953 | | 1.1174 | 92.0 | 93012 | 1.3126 | 20.5997 | 19.9953 | | 1.1155 | 93.0 | 94023 | 1.3131 | 20.0402 | 19.993 | | 1.1167 | 94.0 | 95034 | 1.3140 | 20.219 | 19.9905 | | 1.1301 | 95.0 | 96045 | 1.3142 | 19.8332 | 19.9922 | | 1.0975 | 96.0 | 97056 | 1.3096 | 19.6051 | 19.9942 | | 1.1025 | 97.0 | 98067 | 1.3148 | 20.4323 | 19.993 | | 1.0932 | 98.0 | 99078 | 1.3134 | 20.0839 | 19.9942 | | 1.0871 | 99.0 | 100089 | 1.3071 | 20.0202 | 19.9939 | | 1.102 | 100.0 | 101100 | 1.3091 | 20.0454 | 19.9947 | | 1.0969 | 101.0 | 102111 | 1.3090 | 19.4474 | 19.9947 | | 1.0988 | 102.0 | 103122 | 1.3117 | 20.1905 | 19.9922 | | 1.0816 | 103.0 | 104133 | 1.3048 | 20.3346 | 19.9928 | | 1.0809 | 104.0 | 105144 | 1.3058 | 20.323 | 19.9953 | | 1.0861 | 105.0 | 106155 | 1.3052 | 20.6984 | 19.9944 | | 1.0907 | 106.0 | 107166 | 1.3076 | 20.3413 | 19.9947 | | 1.0747 | 107.0 | 108177 | 1.3050 | 20.3362 | 19.9955 | | 1.0839 | 108.0 | 109188 | 1.3060 | 20.5379 | 19.9936 | | 1.0755 | 109.0 | 110199 | 1.3071 | 20.3886 | 19.9939 | | 1.0463 | 110.0 | 111210 | 1.3058 | 19.9524 | 19.9953 | | 1.0644 | 111.0 | 112221 | 1.3033 | 19.7226 | 19.9972 | | 1.0771 | 112.0 | 
113232 | 1.3089 | 19.9861 | 19.9958 | | 1.0819 | 113.0 | 114243 | 1.3031 | 20.5527 | 19.9942 | | 1.0483 | 114.0 | 115254 | 1.3063 | 20.0048 | 19.9978 | | 1.04 | 115.0 | 116265 | 1.3020 | 20.2327 | 19.9969 | | 1.0574 | 116.0 | 117276 | 1.3025 | 19.6818 | 19.995 | | 1.0356 | 117.0 | 118287 | 1.3077 | 20.1054 | 19.9967 | | 1.0525 | 118.0 | 119298 | 1.3022 | 20.14 | 19.9967 | | 1.0409 | 119.0 | 120309 | 1.2983 | 19.7657 | 19.9972 | | 1.0431 | 120.0 | 121320 | 1.2945 | 20.1315 | 19.9975 | | 1.0419 | 121.0 | 122331 | 1.3035 | 19.8364 | 19.9972 | | 1.0411 | 122.0 | 123342 | 1.2951 | 20.204 | 19.9981 | | 1.0396 | 123.0 | 124353 | 1.3019 | 20.6711 | 19.9955 | | 1.0424 | 124.0 | 125364 | 1.2950 | 20.6527 | 19.9969 | | 1.0203 | 125.0 | 126375 | 1.3008 | 20.4314 | 19.9972 | | 1.0351 | 126.0 | 127386 | 1.3008 | 20.0237 | 19.9978 | | 1.0424 | 127.0 | 128397 | 1.2993 | 20.3024 | 19.9983 | | 1.0165 | 128.0 | 129408 | 1.2960 | 20.1769 | 19.9978 | | 1.0216 | 129.0 | 130419 | 1.2977 | 19.8483 | 19.9972 | | 1.0207 | 130.0 | 131430 | 1.2939 | 20.0639 | 19.9969 | | 1.0119 | 131.0 | 132441 | 1.2985 | 19.731 | 19.9972 | | 0.9965 | 132.0 | 133452 | 1.3006 | 19.5983 | 19.9969 | | 1.0034 | 133.0 | 134463 | 1.2974 | 19.6943 | 19.9989 | | 1.0241 | 134.0 | 135474 | 1.3015 | 20.0083 | 19.9981 | | 1.0181 | 135.0 | 136485 | 1.2982 | 19.6057 | 19.9989 | | 1.0112 | 136.0 | 137496 | 1.2931 | 19.3408 | 19.9986 | | 0.9927 | 137.0 | 138507 | 1.2999 | 19.5222 | 19.9983 | | 1.0134 | 138.0 | 139518 | 1.2909 | 19.42 | 19.9989 | | 0.9921 | 139.0 | 140529 | 1.2951 | 19.8604 | 19.9989 | | 0.9891 | 140.0 | 141540 | 1.2916 | 20.0752 | 19.9989 | | 0.9896 | 141.0 | 142551 | 1.2910 | 19.7536 | 19.9992 | | 1.0034 | 142.0 | 143562 | 1.2934 | 20.0064 | 19.9986 | | 0.9718 | 143.0 | 144573 | 1.2973 | 19.9304 | 19.9989 | | 1.0141 | 144.0 | 145584 | 1.2940 | 20.5053 | 19.9986 | | 0.99 | 145.0 | 146595 | 1.2980 | 20.0913 | 19.9975 | | 0.9729 | 146.0 | 147606 | 1.2927 | 19.7229 | 19.9978 | | 0.9732 | 147.0 | 148617 | 1.2920 | 20.2104 | 19.9975 | | 0.9778 | 148.0 | 149628 | 1.2947 | 20.1365 | 19.9981 | | 0.987 | 149.0 | 150639 | 1.3007 | 20.3436 | 19.9972 | | 0.987 | 150.0 | 151650 | 1.3003 | 20.2827 | 19.9983 | | 0.9788 | 151.0 | 152661 | 1.2953 | 20.2941 | 19.9972 | | 0.9899 | 152.0 | 153672 | 1.2951 | 20.5454 | 19.9978 | | 0.978 | 153.0 | 154683 | 1.2946 | 20.7448 | 19.9969 | | 0.9614 | 154.0 | 155694 | 1.2975 | 20.5359 | 19.9969 | | 0.9759 | 155.0 | 156705 | 1.2925 | 20.3661 | 19.9975 | | 0.9627 | 156.0 | 157716 | 1.2954 | 20.5535 | 19.9969 | | 0.9692 | 157.0 | 158727 | 1.2930 | 20.1919 | 19.9969 | | 0.9737 | 158.0 | 159738 | 1.2922 | 20.484 | 19.9972 | | 0.9642 | 159.0 | 160749 | 1.2952 | 20.5444 | 19.9975 | | 0.9679 | 160.0 | 161760 | 1.2930 | 20.3731 | 19.9983 | | 0.9571 | 161.0 | 162771 | 1.2933 | 20.4158 | 19.9978 | | 0.9542 | 162.0 | 163782 | 1.2937 | 20.4823 | 19.9978 | | 0.9537 | 163.0 | 164793 | 1.2997 | 20.6457 | 19.9964 | | 0.951 | 164.0 | 165804 | 1.2982 | 20.0897 | 19.9986 | | 0.9556 | 165.0 | 166815 | 1.2944 | 20.45 | 19.9986 | | 0.9534 | 166.0 | 167826 | 1.2961 | 20.2743 | 19.9967 | | 0.9381 | 167.0 | 168837 | 1.2922 | 19.8311 | 19.9969 | | 0.9347 | 168.0 | 169848 | 1.2938 | 19.9427 | 19.9978 | | 0.9514 | 169.0 | 170859 | 1.2968 | 20.2039 | 19.9983 | | 0.9439 | 170.0 | 171870 | 1.3014 | 19.9784 | 19.9961 | | 0.9379 | 171.0 | 172881 | 1.3000 | 20.1213 | 19.9964 | | 0.9326 | 172.0 | 173892 | 1.2930 | 20.0882 | 19.9969 | | 0.9178 | 173.0 | 174903 | 1.2942 | 20.1997 | 19.9972 | | 0.9511 | 174.0 | 175914 | 1.2931 | 20.6471 | 19.9969 | 
| 0.9438 | 175.0 | 176925 | 1.2945 | 20.7321 | 19.9983 | | 0.929 | 176.0 | 177936 | 1.2967 | 20.5813 | 19.9964 | | 0.9343 | 177.0 | 178947 | 1.2940 | 20.2307 | 19.9978 | | 0.9344 | 178.0 | 179958 | 1.2949 | 20.2401 | 19.9969 | | 0.9319 | 179.0 | 180969 | 1.2974 | 19.9881 | 19.9972 | | 0.9286 | 180.0 | 181980 | 1.2974 | 20.2666 | 19.9961 | | 0.9074 | 181.0 | 182991 | 1.2939 | 20.2549 | 19.9969 | | 0.93 | 182.0 | 184002 | 1.2990 | 20.0121 | 19.9969 | | 0.9303 | 183.0 | 185013 | 1.2944 | 20.056 | 19.9978 | | 0.9259 | 184.0 | 186024 | 1.3003 | 19.9021 | 19.9953 | | 0.9014 | 185.0 | 187035 | 1.2962 | 20.0381 | 19.9958 | | 0.9288 | 186.0 | 188046 | 1.2976 | 20.1909 | 19.9947 | | 0.9086 | 187.0 | 189057 | 1.2969 | 20.2923 | 19.9969 | | 0.9183 | 188.0 | 190068 | 1.2941 | 20.1649 | 19.9967 | | 0.9141 | 189.0 | 191079 | 1.3028 | 20.0891 | 19.9958 | | 0.9264 | 190.0 | 192090 | 1.2935 | 20.0164 | 19.9958 | | 0.9307 | 191.0 | 193101 | 1.2956 | 19.8606 | 19.9964 | | 0.9179 | 192.0 | 194112 | 1.2933 | 19.9815 | 19.9961 | | 0.9123 | 193.0 | 195123 | 1.2977 | 20.1232 | 19.9953 | | 0.9221 | 194.0 | 196134 | 1.3014 | 20.0674 | 19.995 | | 0.9195 | 195.0 | 197145 | 1.3031 | 19.9839 | 19.9944 | | 0.9139 | 196.0 | 198156 | 1.2947 | 20.0344 | 19.9953 | | 0.9074 | 197.0 | 199167 | 1.2956 | 20.1076 | 19.9961 | | 0.9149 | 198.0 | 200178 | 1.2963 | 20.0898 | 19.9955 | | 0.9219 | 199.0 | 201189 | 1.2990 | 20.171 | 19.9964 | | 0.8989 | 200.0 | 202200 | 1.2983 | 20.1548 | 19.9961 | | 0.9004 | 201.0 | 203211 | 1.2977 | 20.2135 | 19.9955 | | 0.9043 | 202.0 | 204222 | 1.3023 | 20.3024 | 19.9964 | | 0.917 | 203.0 | 205233 | 1.3014 | 20.5967 | 19.9967 | | 0.9012 | 204.0 | 206244 | 1.3001 | 20.5489 | 19.9961 | | 0.9136 | 205.0 | 207255 | 1.2963 | 20.5013 | 19.9969 | | 0.897 | 206.0 | 208266 | 1.3016 | 20.3285 | 19.9969 | | 0.9036 | 207.0 | 209277 | 1.2981 | 20.3278 | 19.9967 | | 0.9225 | 208.0 | 210288 | 1.3055 | 20.4756 | 19.9967 | | 0.8959 | 209.0 | 211299 | 1.2987 | 20.3112 | 19.9972 | | 0.903 | 210.0 | 212310 | 1.2977 | 20.5512 | 19.9961 | | 0.9012 | 211.0 | 213321 | 1.3026 | 20.4304 | 19.9964 | | 0.8906 | 212.0 | 214332 | 1.2998 | 20.4206 | 19.9967 | | 0.8906 | 213.0 | 215343 | 1.3031 | 20.4499 | 19.9964 | | 0.9049 | 214.0 | 216354 | 1.3029 | 20.6908 | 19.9958 | | 0.9034 | 215.0 | 217365 | 1.2980 | 20.3614 | 19.9969 | | 0.8971 | 216.0 | 218376 | 1.2985 | 20.6196 | 19.9972 | | 0.885 | 217.0 | 219387 | 1.3019 | 20.584 | 19.9972 | | 0.8799 | 218.0 | 220398 | 1.3041 | 20.5843 | 19.9967 | | 0.8805 | 219.0 | 221409 | 1.3035 | 20.5123 | 19.9972 | | 0.8896 | 220.0 | 222420 | 1.3006 | 20.7331 | 19.9975 | | 0.8851 | 221.0 | 223431 | 1.2973 | 20.6914 | 19.9975 | | 0.893 | 222.0 | 224442 | 1.3004 | 20.7484 | 19.9978 | | 0.8903 | 223.0 | 225453 | 1.3001 | 20.5207 | 19.9981 | | 0.8924 | 224.0 | 226464 | 1.3026 | 20.6635 | 19.9972 | | 0.8839 | 225.0 | 227475 | 1.3056 | 20.6999 | 19.9978 | | 0.8631 | 226.0 | 228486 | 1.3042 | 20.9581 | 19.9967 | | 0.8677 | 227.0 | 229497 | 1.3037 | 20.8283 | 19.9964 | | 0.867 | 228.0 | 230508 | 1.3042 | 20.8781 | 19.9978 | | 0.8878 | 229.0 | 231519 | 1.3035 | 20.6884 | 19.9981 | | 0.8805 | 230.0 | 232530 | 1.3092 | 20.716 | 19.9975 | | 0.8769 | 231.0 | 233541 | 1.2988 | 20.6323 | 19.9975 | | 0.8833 | 232.0 | 234552 | 1.3039 | 20.5529 | 19.9978 | | 0.8798 | 233.0 | 235563 | 1.3028 | 20.5848 | 19.9981 | | 0.8694 | 234.0 | 236574 | 1.3037 | 20.4147 | 19.9983 | | 0.8888 | 235.0 | 237585 | 1.3022 | 20.5179 | 19.9983 | | 0.8724 | 236.0 | 238596 | 1.3027 | 20.4379 | 19.9978 | | 0.8864 | 237.0 | 239607 | 1.3024 
| 20.3993 | 19.9972 | | 0.8684 | 238.0 | 240618 | 1.3043 | 20.5063 | 19.9969 | | 0.8753 | 239.0 | 241629 | 1.3072 | 20.4079 | 19.9969 | | 0.8734 | 240.0 | 242640 | 1.3026 | 20.5173 | 19.9967 | | 0.867 | 241.0 | 243651 | 1.3044 | 20.6249 | 19.9972 | | 0.8671 | 242.0 | 244662 | 1.3094 | 20.6827 | 19.9972 | | 0.8721 | 243.0 | 245673 | 1.3045 | 20.5017 | 19.9978 | | 0.8726 | 244.0 | 246684 | 1.3065 | 20.5748 | 19.9967 | | 0.8741 | 245.0 | 247695 | 1.3063 | 20.5345 | 19.9972 | | 0.8634 | 246.0 | 248706 | 1.3036 | 20.6084 | 19.9972 | | 0.8527 | 247.0 | 249717 | 1.3045 | 20.535 | 19.9972 | | 0.8662 | 248.0 | 250728 | 1.3089 | 20.5306 | 19.9972 | | 0.8681 | 249.0 | 251739 | 1.3081 | 20.6414 | 19.9967 | | 0.8711 | 250.0 | 252750 | 1.3061 | 20.6039 | 19.9975 | | 0.8653 | 251.0 | 253761 | 1.3018 | 20.5632 | 19.9975 | | 0.8697 | 252.0 | 254772 | 1.3090 | 20.5056 | 19.9978 | | 0.8655 | 253.0 | 255783 | 1.3082 | 20.5235 | 19.9978 | | 0.8636 | 254.0 | 256794 | 1.3067 | 20.5607 | 19.9972 | | 0.8667 | 255.0 | 257805 | 1.3066 | 20.6694 | 19.9964 | | 0.8596 | 256.0 | 258816 | 1.3073 | 20.617 | 19.9967 | | 0.8507 | 257.0 | 259827 | 1.3083 | 20.6035 | 19.9964 | | 0.8677 | 258.0 | 260838 | 1.3077 | 20.6196 | 19.9975 | | 0.8614 | 259.0 | 261849 | 1.3094 | 20.6928 | 19.9969 | | 0.8677 | 260.0 | 262860 | 1.3098 | 20.7181 | 19.9969 | | 0.8628 | 261.0 | 263871 | 1.3065 | 20.679 | 19.9975 | | 0.8636 | 262.0 | 264882 | 1.3055 | 20.7476 | 19.9975 | | 0.8624 | 263.0 | 265893 | 1.3065 | 20.7045 | 19.9972 | | 0.8594 | 264.0 | 266904 | 1.3093 | 20.5442 | 19.9964 | | 0.8658 | 265.0 | 267915 | 1.3105 | 20.7153 | 19.9972 | | 0.8476 | 266.0 | 268926 | 1.3076 | 20.677 | 19.9972 | | 0.858 | 267.0 | 269937 | 1.3091 | 20.6701 | 19.9969 | | 0.8707 | 268.0 | 270948 | 1.3111 | 20.5985 | 19.9975 | | 0.8613 | 269.0 | 271959 | 1.3092 | 20.6108 | 19.9975 | | 0.8497 | 270.0 | 272970 | 1.3070 | 20.5836 | 19.9964 | | 0.8654 | 271.0 | 273981 | 1.3082 | 20.5806 | 19.9983 | | 0.8621 | 272.0 | 274992 | 1.3088 | 20.6817 | 19.9975 | | 0.8619 | 273.0 | 276003 | 1.3090 | 20.5567 | 19.9975 | | 0.8638 | 274.0 | 277014 | 1.3087 | 20.6233 | 19.9975 | | 0.8642 | 275.0 | 278025 | 1.3092 | 20.667 | 19.9967 | | 0.8498 | 276.0 | 279036 | 1.3069 | 20.6295 | 19.9969 | | 0.8572 | 277.0 | 280047 | 1.3107 | 20.6376 | 19.9969 | | 0.8543 | 278.0 | 281058 | 1.3114 | 20.6473 | 19.9964 | | 0.8453 | 279.0 | 282069 | 1.3105 | 20.6931 | 19.9967 | | 0.8575 | 280.0 | 283080 | 1.3077 | 20.691 | 19.9972 | | 0.8492 | 281.0 | 284091 | 1.3101 | 20.7528 | 19.9969 | | 0.8519 | 282.0 | 285102 | 1.3094 | 20.6812 | 19.9981 | | 0.8431 | 283.0 | 286113 | 1.3114 | 20.6608 | 19.9969 | | 0.8546 | 284.0 | 287124 | 1.3093 | 20.6336 | 19.9981 | | 0.86 | 285.0 | 288135 | 1.3108 | 20.6077 | 19.9967 | | 0.8674 | 286.0 | 289146 | 1.3096 | 20.6742 | 19.9978 | | 0.8493 | 287.0 | 290157 | 1.3106 | 20.6674 | 19.9981 | | 0.8521 | 288.0 | 291168 | 1.3099 | 20.5915 | 19.9981 | | 0.856 | 289.0 | 292179 | 1.3102 | 20.6448 | 19.9978 | | 0.8614 | 290.0 | 293190 | 1.3096 | 20.6515 | 19.9981 | | 0.8628 | 291.0 | 294201 | 1.3108 | 20.6679 | 19.9978 | | 0.8498 | 292.0 | 295212 | 1.3104 | 20.6623 | 19.9978 | | 0.8617 | 293.0 | 296223 | 1.3097 | 20.6591 | 19.9978 | | 0.8563 | 294.0 | 297234 | 1.3098 | 20.6266 | 19.9978 | | 0.856 | 295.0 | 298245 | 1.3095 | 20.6536 | 19.9978 | | 0.8493 | 296.0 | 299256 | 1.3095 | 20.6273 | 19.9978 | | 0.8498 | 297.0 | 300267 | 1.3092 | 20.5942 | 19.9978 | | 0.8539 | 298.0 | 301278 | 1.3092 | 20.5942 | 19.9978 | | 0.8608 | 299.0 | 302289 | 1.3091 | 20.5915 | 19.9981 | | 0.8437 | 
300.0 | 303300 | 1.3091 | 20.5964 | 19.9981 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
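No usage example is given. Since this is a T5 text2text checkpoint, a sketch would look like the following; whether the model expects a task prefix is not documented, so feeding the raw informal sentence is an assumption.

```python
from transformers import pipeline

# Minimal sketch: rewrite informal Thai as formal Thai.
rewriter = pipeline(
    "text2text-generation",
    model="farofang/t5-small-finetuned-thai-informal-to-formal",
)

# Informal Thai for "have you eaten yet?" (invented example; no task prefix assumed)
print(rewriter("กินข้าวยัง", max_length=20)[0]["generated_text"])
```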
28c3330c17ba5418aaaa8205ba29ffc8
Khanh/bert-base-multilingual-cased-finetuned-squad
Khanh
bert
12
5
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,294
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4919 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1782 | 1.0 | 579 | 0.5258 | | 0.4938 | 2.0 | 1158 | 0.4639 | | 0.32 | 3.0 | 1737 | 0.4919 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
928b7c1e6fdb9918b878ac720af2ac85
spaablauw/ActionHelper
spaablauw
null
3
0
null
15
null
false
false
false
wtfpl
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,558
false
Trained for 500 steps with a lr of 0.003 and 4 steps gradient accumulation. ![08039-3409504356-portrait of woman in trenchcoat, city bokeh background, art by bforangeteal, extremely detailed, embers, debris, art by photohel.png](https://s3.amazonaws.com/moonup/production/uploads/1670555250809-6312579fc7577b68d90a7646.png) ![08009-1360552088-ford mustang, city bokeh background, art by bforangeteal, extremely detailed, embers, debris, art by photohelper, nukesd.png](https://s3.amazonaws.com/moonup/production/uploads/1670555259179-6312579fc7577b68d90a7646.png) ![07949-29151249-portrait of gigachad, city bokeh background, art by bforangeteal, extremely detailed, embers, debris, art by photohelper, mascul.png](https://s3.amazonaws.com/moonup/production/uploads/1670555280835-6312579fc7577b68d90a7646.png) ![07868-2761092669-headshot portrait of henry cavill in general uniform, city bokeh background, art by actionhelper, extremely detailed, embers, de.png](https://s3.amazonaws.com/moonup/production/uploads/1670555326731-6312579fc7577b68d90a7646.png) ![07874-1633627578-headshot portrait of john wick in uniform, city bokeh background, art by actionhelper, extremely detailed, embers, debris, art b.png](https://s3.amazonaws.com/moonup/production/uploads/1670555331232-6312579fc7577b68d90a7646.png) ![08016-1561599122-fighter jet flying, city bokeh background, art by bforangeteal, extremely detailed, embers, debris, art by photohelper, nukesd.png](https://s3.amazonaws.com/moonup/production/uploads/1670555384383-6312579fc7577b68d90a7646.png)
6e3195d1e3a18a778c2d8f9b43679c23
eduardopds/marian-finetuned-kde4-en-to-fr
eduardopds
marian
9
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,463
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # eduardopds/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6855 - Validation Loss: 0.8096 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0600 | 0.8815 | 0 | | 0.7981 | 0.8266 | 1 | | 0.6855 | 0.8096 | 2 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.2.1 - Tokenizers 0.12.1
83e5f694c42273d091c9c2df63739ffc
KuroTuyuri/kantoku-artstyle
KuroTuyuri
null
21
237
diffusers
20
text-to-image
false
false
false
creativeml-openrail-m
['ja', 'en']
null
null
2
1
1
0
0
0
0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
true
true
1,853
false
# KANTOKU V1.5

KANTOKU V1.5にようこそ。本モデルは[Anything V3](https://huggingface.co/Linaqruf/anything-v3.0)を調整し作られた2Dアニメ生成モデルです。
本モデルを使用することによりカントク風の絵を出力することができます。本モデルはv1の改良版です。
Promptでの呪文はkantokuです!出したいアイデアを先にプロンプトに入力してください。その後にこの呪文を追加するとより良い結果が得られます。
トークンはカントクです。

Welcome to KANTOKU V1.5. This model is a 2D anime image generation model created by fine-tuning [Anything V3](https://huggingface.co/Linaqruf/anything-v3.0). By using this model, you can output KANTOKU-style pictures. This model is an improved version of v1. The trigger word in the prompt is kantoku! Enter the idea you want to produce into the prompt first; adding this trigger word afterwards will give you better results. The token is kantoku.

例えば (For example)

**_masterpiece, 1girl, white hair, kimono, kantoku_**

## サンプル Sample

**学生服の少女 (Girl in school uniform)**

![学生服の少女](https://huggingface.co/KuroTuyuri/kantoku-v1-5/resolve/main/sample_images/download%20(43).png)
![](https://huggingface.co/KuroTuyuri/kantoku-v1-5/resolve/main/sample_images/download%20(44).png)

**白髪 (White hair)**

![白髪](https://huggingface.co/KuroTuyuri/kantoku-v1-5/resolve/main/sample_images/download_(48).png)

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
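The card does not include a code snippet; since the repository carries the stable-diffusion-diffusers tag, a generation sketch with 🤗 Diffusers would look like this. The prompt reuses the card's own example, and loading on CUDA with fp16 is an assumption about the available hardware.

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: generate a KANTOKU-style image with the trigger word "kantoku".
pipe = StableDiffusionPipeline.from_pretrained(
    "KuroTuyuri/kantoku-artstyle", torch_dtype=torch.float16
).to("cuda")

image = pipe("masterpiece, 1girl, white hair, kimono, kantoku").images[0]
image.save("kantoku_sample.png")
```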
14fb9385ad727bb6c38f342602c0ef9f
espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-6f4cf5
espnet
null
21
3
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['ja']
['jsut']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
1,875
false
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4391405/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
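Since the demo snippet above is still marked "coming soon", here is a sketch using the ESPnet2 inference API. It assumes the `espnet_model_zoo` package is installed and can resolve this model tag from Zenodo; the Japanese sentence is an invented example.

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Minimal sketch: synthesise Japanese speech with the FastSpeech2 model.
text2speech = Text2Speech.from_pretrained(
    "kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave"
)

output = text2speech("これはテスト音声です。")  # "This is a test utterance."
sf.write("out.wav", output["wav"].numpy(), text2speech.fs, "PCM_16")
```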
84735caebdb71824f5469a98a3d92bae
IDEA-CCNL/Wenzhong-GPT2-110M
IDEA-CCNL
gpt2
9
1,781
transformers
7
text-generation
true
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
['generate', 'gpt2']
false
true
true
3,024
false
# Wenzhong-GPT2-110M - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/) ## 简介 Brief Introduction 善于处理NLG任务,中文版的GPT2-Small。 Focused on handling NLG tasks, Chinese GPT2-Small. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 110M | 中文 Chinese | ## 模型信息 Model Information 类似于Wenzhong2.0-GPT2-3.5B-chinese,我们实现了一个small版本的12层的Wenzhong-GPT2-110M,并且在悟道(300G版本)上面进行预训练。 Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implement a small size Wenzhong-GPT2-110M with 12 layers, which is pre-trained on Wudao Corpus (300G version). ## 使用 Usage ### 加载模型 Loading Models ```python from transformers import GPT2Tokenizer,GPT2LMHeadModel hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) model = GPT2LMHeadModel.from_pretrained(hf_model_path) ``` ### 使用示例 Usage Examples ```python question = "北京是中国的" inputs = tokenizer(question,return_tensors='pt') generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=150, # max_new_tokens=80, do_sample=True, top_p = 0.6, # num_beams=5, eos_token_id=50256, pad_token_id=0, num_return_sequences = 5) for idx,sentence in enumerate(generation_output.sequences): print('next sentence %d:\n'%idx, tokenizer.decode(sentence).split('<|endoftext|>')[0]) print('*'*40) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
6916bf6ffc5d3e6579b8696130b01c16
anas-awadalla/bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4
anas-awadalla
bart
18
15
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
963
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 35.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
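No usage example is included. The exact input template used during fine-tuning is not documented; the "question: ... context: ..." format below is a common convention for seq2seq SQuAD models and is therefore an assumption.

```python
from transformers import pipeline

# Minimal sketch: generative QA; the input template is an assumption.
qa = pipeline(
    "text2text-generation",
    model="anas-awadalla/bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4",
)

prompt = (
    "question: Where is the Eiffel Tower? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
print(qa(prompt)[0]["generated_text"])
```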
5529c29e7954815e2bd6a60f8f021ad7
jonatasgrosman/exp_w2v2t_de_wavlm_s824
jonatasgrosman
wavlm
10
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
439
false
# exp_w2v2t_de_wavlm_s824 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
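The card credits the HuggingSound tool, whose documented entry point is `SpeechRecognitionModel`; a transcription sketch (the audio path is a placeholder) looks like this:

```python
from huggingsound import SpeechRecognitionModel

# Minimal sketch: transcribe 16kHz German audio files with HuggingSound.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_wavlm_s824")
transcriptions = model.transcribe(["/path/to/sample.wav"])
print(transcriptions[0]["transcription"])
```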
d486f4116857fd2317e36f8e383a4578
SkyR/hing-roberta-ours-run-5
SkyR
xlm-roberta
9
2
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,085
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hing-roberta-ours-run-5 This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0980 - Accuracy: 0.725 - Precision: 0.6881 - Recall: 0.6575 - F1: 0.6651 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9336 | 1.0 | 200 | 0.7394 | 0.675 | 0.6450 | 0.6509 | 0.6398 | | 0.6924 | 2.0 | 400 | 0.9530 | 0.66 | 0.6285 | 0.5845 | 0.5551 | | 0.4406 | 3.0 | 600 | 0.8914 | 0.68 | 0.6462 | 0.6527 | 0.6479 | | 0.2493 | 4.0 | 800 | 1.7083 | 0.68 | 0.6441 | 0.6446 | 0.6426 | | 0.1231 | 5.0 | 1000 | 1.9496 | 0.695 | 0.6570 | 0.6624 | 0.6591 | | 0.0788 | 6.0 | 1200 | 2.5025 | 0.67 | 0.6209 | 0.6039 | 0.6011 | | 0.0408 | 7.0 | 1400 | 2.2651 | 0.695 | 0.6594 | 0.6617 | 0.6517 | | 0.0434 | 8.0 | 1600 | 2.4072 | 0.725 | 0.6941 | 0.6754 | 0.6710 | | 0.0074 | 9.0 | 1800 | 2.7817 | 0.7 | 0.6535 | 0.6467 | 0.6488 | | 0.023 | 10.0 | 2000 | 2.8578 | 0.7 | 0.6470 | 0.6353 | 0.6337 | | 0.0151 | 11.0 | 2200 | 2.7783 | 0.695 | 0.6457 | 0.6373 | 0.6390 | | 0.0108 | 12.0 | 2400 | 2.5953 | 0.695 | 0.6563 | 0.6586 | 0.6564 | | 0.0192 | 13.0 | 2600 | 3.0715 | 0.705 | 0.6631 | 0.6326 | 0.6320 | | 0.0149 | 14.0 | 2800 | 3.1048 | 0.715 | 0.6769 | 0.6450 | 0.6503 | | 0.0205 | 15.0 | 3000 | 2.7812 | 0.71 | 0.6657 | 0.6538 | 0.6565 | | 0.0024 | 16.0 | 3200 | 2.9304 | 0.72 | 0.6796 | 0.6537 | 0.6610 | | 0.0033 | 17.0 | 3400 | 2.7170 | 0.73 | 0.6899 | 0.6760 | 0.6811 | | 0.0056 | 18.0 | 3600 | 2.9693 | 0.72 | 0.6783 | 0.6560 | 0.6628 | | 0.0015 | 19.0 | 3800 | 3.0943 | 0.72 | 0.6825 | 0.6541 | 0.6611 | | 0.0017 | 20.0 | 4000 | 3.0980 | 0.725 | 0.6881 | 0.6575 | 0.6651 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Tokenizers 0.13.2
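No usage snippet or label mapping is given, so the sketch below simply prints whatever label the checkpoint returns; the Hindi-English code-mixed sentence is invented.

```python
from transformers import pipeline

# Minimal sketch: classify a Hindi-English code-mixed sentence.
classifier = pipeline("text-classification", model="SkyR/hing-roberta-ours-run-5")
print(classifier("yeh movie toh bahut hi amazing thi!"))
```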
c5cf0eb5e5c3fdc1d79a6f681acf3a41
sd-concepts-library/sewerslvt
sd-concepts-library
null
10
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,114
false
### Sewerslvt on Stable Diffusion This is the `Sewerslvt` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![Sewerslvt 0](https://huggingface.co/sd-concepts-library/sewerslvt/resolve/main/concept_images/0.jpeg) ![Sewerslvt 1](https://huggingface.co/sd-concepts-library/sewerslvt/resolve/main/concept_images/2.jpeg) ![Sewerslvt 2](https://huggingface.co/sd-concepts-library/sewerslvt/resolve/main/concept_images/4.jpeg) ![Sewerslvt 3](https://huggingface.co/sd-concepts-library/sewerslvt/resolve/main/concept_images/1.jpeg) ![Sewerslvt 4](https://huggingface.co/sd-concepts-library/sewerslvt/resolve/main/concept_images/3.jpeg)
675c2c6dabc315a6325395f48332d6f6
research-backup/t5-small-squadshifts-vanilla-amazon-qg
research-backup
t5
34
1
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qg_squadshifts']
null
0
0
0
0
0
0
0
['question generation']
true
true
true
4,115
false
# Model Card of `research-backup/t5-small-squadshifts-vanilla-amazon-qg` This model is fine-tuned version of [t5-small](https://huggingface.co/t5-small) for question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: amazon) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (amazon) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="research-backup/t5-small-squadshifts-vanilla-amazon-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/t5-small-squadshifts-vanilla-amazon-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-amazon-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 81.77 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 4.56 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 1.45 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 0.6 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 0.3 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 5.27 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 50.5 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 5.59 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: amazon - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 32 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-amazon-qg/raw/main/trainer_config.json). 
## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
b67de4b40ddf8533a557dd30fdaaffa3
CarpetCleaningLewisvilleTX/CarpetCleaningLewisvilleTX
CarpetCleaningLewisvilleTX
null
2
0
null
0
null
false
false
false
other
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,106
false
Carpet Cleaning Lewisville TX https://carpetcleaninglewisville.com/ 972-338-5376 Could it be said that you are searching for productive and Modest Floor covering Cleaning? You ought to know that there's just a single spot for you to call: cover Cleaning Lewisville, TX. Appreciate top cleaning that is likewise eco-accommodating from proficient cleaners today. You should simply call our number and book your visit.Pet steam cleaner is the most effective way for pet stain expulsion as well as spot evacuation, stain evacuation, wine stain expulsion, and even smell expulsion. Steam cleaning has ended up being far more effective than the other compound techniques that don't just demolish our floor coverings over the long haul yet additionally hurt your skin and take a ton of effort.On the other hand, steam cleaning is an eco-accommodating green cleaning strategy that productively arrives at the profound spots in your rugs and totally eliminates any stain. Also, it is protected and modest, and you won't have to put forth any attempt. Cover Cleaning Lewisville, TX, will thoroughly take care of you.
866b6e4084f15008dee83af5ca2e9bf8
muzamil47/wav2vec2-large-xlsr-53-arabic-demo
muzamil47
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['arabic_speech_corpus', 'mozilla-foundation/common_voice_6_1']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
6,438
false
# Wav2Vec2-Large-XLSR-53-Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import librosa import torch from lang_trans.arabic import buckwalter from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor asr_model = "muzamil47/wav2vec2-large-xlsr-53-arabic-demo" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") def load_file_to_data(file, srate=16_000): batch = {} speech, sampling_rate = librosa.load(file, sr=srate) batch["speech"] = speech batch["sampling_rate"] = sampling_rate return batch processor = Wav2Vec2Processor.from_pretrained(asr_model) model = Wav2Vec2ForCTC.from_pretrained(asr_model).to(device) def predict(data): features = processor(data["speech"], sampling_rate=data["sampling_rate"], return_tensors="pt", padding=True) input_values = features.input_values.to(device) try: attention_mask = features.attention_mask.to(device) except: attention_mask = None with torch.no_grad(): predicted = torch.argmax(model(input_values, attention_mask=attention_mask).logits, dim=-1) data["predicted"] = processor.tokenizer.decode(predicted[0]) print("predicted:", buckwalter.untrans(data["predicted"])) return data predict(load_file_to_data("common_voice_ar_19058307.mp3")) ``` **Output Result**: ```shell predicted: هل يمكنني التحدث مع المسؤول هنا ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset from lang_trans.arabic import buckwalter from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor asr_model = "muzamil47/wav2vec2-large-xlsr-53-arabic-demo" dataset = load_dataset("common_voice", "ar", split="test[:10]") resamplers = { # all three sampling rates exist in test split 48000: torchaudio.transforms.Resample(48000, 16000), 44100: torchaudio.transforms.Resample(44100, 16000), 32000: torchaudio.transforms.Resample(32000, 16000), } def prepare_example(example): speech, sampling_rate = torchaudio.load(example["path"]) example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy() return example dataset = dataset.map(prepare_example) processor = Wav2Vec2Processor.from_pretrained(asr_model) model = Wav2Vec2ForCTC.from_pretrained(asr_model).eval() def predict(batch): inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True) with torch.no_grad(): predicted = torch.argmax(model(inputs.input_values).logits, dim=-1) predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script batch["predicted"] = processor.tokenizer.batch_decode(predicted) return batch dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"]) for reference, predicted in zip(dataset["sentence"], dataset["predicted"]): print("reference:", reference) print("predicted:", buckwalter.untrans(predicted)) print("--") ``` **Output Results**: ```shell reference: ما أطول عودك! predicted: ما اطول عودك reference: ماتت عمتي منذ سنتين. predicted: ما تتعمتي منذو سنتين reference: الألمانية ليست لغة سهلة. predicted: الالمانية ليست لغة سهلة reference: طلبت منه أن يبعث الكتاب إلينا. predicted: طلبت منه ان يبعث الكتاب الينا reference: .السيد إيتو رجل متعلم predicted: السيد ايتو رجل متعلم reference: الحمد لله. 
predicted: الحمذ لللا
reference: في الوقت نفسه بدأت الرماح والسهام تقع بين الغزاة
predicted: في الوقت نفسه ابدات الرماح و السهام تقع بين الغزاء
reference: لا أريد أن أكون ثقيلَ الظِّل ، أريد أن أكون رائعًا! !
predicted: لا اريد ان اكون ثقيل الظل اريد ان اكون رائع
reference: خذ مظلة معك في حال أمطرت.
predicted: خذ مظلة معك في حال امطرت
reference: .ركب توم السيارة
predicted: ركب توم السيارة
```

To compute the model's word error rate (WER) on the Arabic test data of Common Voice:

```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor

set_seed(42)

test_dataset = load_dataset("common_voice", "ar", split="test")

processor = Wav2Vec2Processor.from_pretrained("muzamil47/wav2vec2-large-xlsr-53-arabic-demo")
model = Wav2Vec2ForCTC.from_pretrained("muzamil47/wav2vec2-large-xlsr-53-arabic-demo")
model.to("cuda")

chars_to_ignore_regex = '[\,\؟\.\!\-\;\\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets. We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    batch["sentence"] = re.sub('[a-z]', '', batch["sentence"])
    batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
    noise = re.compile(""" ّ    | # Tashdid
                           َ    | # Fatha
                           ً    | # Tanwin Fath
                           ُ    | # Damma
                           ٌ    | # Tanwin Damm
                           ِ    | # Kasra
                           ٍ    | # Tanwin Kasr
                           ْ    | # Sukun
                           ـ     # Tatwil/Kashida
                       """, re.VERBOSE)
    batch["sentence"] = re.sub(noise, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 53.54
9dc9a6accba8373637a46be2e9498898
SimulSt/distilbert-base-uncased-finetuned-emotion
SimulSt
distilbert
30
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,344
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2202 - Accuracy: 0.925 - F1: 0.9250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8285 | 1.0 | 250 | 0.3203 | 0.905 | 0.9008 | | 0.2544 | 2.0 | 500 | 0.2202 | 0.925 | 0.9250 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
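## How to use

The card above does not include a usage snippet; the following is a minimal sketch (not from the original card) using the standard `transformers` text-classification pipeline. The example sentence is made up, and the label names come from the `emotion` dataset rather than from this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="SimulSt/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset defines six labels: sadness, joy, love, anger, fear, surprise
print(classifier("I'm thrilled the fine-tuning finally converged!"))
# e.g. [{'label': 'joy', 'score': 0.99}]  (illustrative output)
```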
8ecef72315edcd1bf460c7eb3303b47d
garnagar/whisper-ft-libri-en
garnagar
whisper
27
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['librispeech_asr']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
6,117
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-ft-libri-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.8069 - Wer: 31.6163 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.740176574997311e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 2.1717 | 0.38 | 5 | 2.1709 | 98.0462 | | 1.2371 | 0.77 | 10 | 1.2719 | 79.9290 | | 0.7577 | 1.15 | 15 | 1.0510 | 35.3464 | | 0.5325 | 1.54 | 20 | 0.9475 | 32.6821 | | 0.5545 | 1.92 | 25 | 0.8607 | 30.3730 | | 0.2957 | 2.31 | 30 | 0.8051 | 33.3925 | | 0.1846 | 2.69 | 35 | 0.7487 | 30.1954 | | 0.0748 | 3.08 | 40 | 0.6882 | 32.1492 | | 0.0709 | 3.46 | 45 | 0.6692 | 31.2611 | | 0.0908 | 3.85 | 50 | 0.6465 | 29.4849 | | 0.0764 | 4.23 | 55 | 0.6578 | 28.9520 | | 0.0259 | 4.62 | 60 | 0.6637 | 30.0178 | | 0.0178 | 5.0 | 65 | 0.6955 | 30.3730 | | 0.0131 | 5.38 | 70 | 0.6869 | 33.2149 | | 0.0162 | 5.77 | 75 | 0.7000 | 32.3268 | | 0.0081 | 6.15 | 80 | 0.6814 | 32.3268 | | 0.0075 | 6.54 | 85 | 0.6897 | 31.0835 | | 0.0069 | 6.92 | 90 | 0.7151 | 32.6821 | | 0.0062 | 7.31 | 95 | 0.7181 | 30.3730 | | 0.0056 | 7.69 | 100 | 0.7173 | 30.0178 | | 0.0052 | 8.08 | 105 | 0.7411 | 31.9716 | | 0.0073 | 8.46 | 110 | 0.7526 | 32.5044 | | 0.0061 | 8.85 | 115 | 0.7467 | 32.8597 | | 0.0034 | 9.23 | 120 | 0.7314 | 31.7940 | | 0.0122 | 9.62 | 125 | 0.7276 | 31.7940 | | 0.0429 | 10.0 | 130 | 0.7417 | 32.5044 | | 0.0032 | 10.38 | 135 | 0.7555 | 31.9716 | | 0.0141 | 10.77 | 140 | 0.7636 | 31.2611 | | 0.0038 | 11.15 | 145 | 0.7607 | 31.9716 | | 0.0038 | 11.54 | 150 | 0.7716 | 33.0373 | | 0.0035 | 11.92 | 155 | 0.7985 | 34.2806 | | 0.0038 | 12.31 | 160 | 0.7797 | 32.1492 | | 0.0036 | 12.69 | 165 | 0.7767 | 31.4387 | | 0.0022 | 13.08 | 170 | 0.7830 | 31.7940 | | 0.0033 | 13.46 | 175 | 0.7992 | 30.7282 | | 0.0019 | 13.85 | 180 | 0.7541 | 30.0178 | | 0.0016 | 14.23 | 185 | 0.7587 | 30.0178 | | 0.0027 | 14.62 | 190 | 0.7766 | 30.3730 | | 0.0016 | 15.0 | 195 | 0.8056 | 32.8597 | | 0.0015 | 15.38 | 200 | 0.8096 | 32.5044 | | 0.0012 | 15.77 | 205 | 0.7931 | 32.6821 | | 0.001 | 16.15 | 210 | 0.7829 | 31.6163 | | 0.0045 | 16.54 | 215 | 0.7774 | 30.9059 | | 0.0009 | 16.92 | 220 | 0.7750 | 30.1954 | | 0.0009 | 17.31 | 225 | 0.7780 | 28.9520 | | 0.0008 | 17.69 | 230 | 0.7803 | 29.1297 | | 0.0007 | 18.08 | 235 | 0.7807 | 29.6625 | | 0.0025 | 18.46 | 240 | 0.7813 | 30.1954 | | 0.0007 | 18.85 | 245 | 0.7840 | 30.0178 | | 0.0006 | 19.23 | 250 | 0.7860 | 30.0178 | | 0.0007 | 19.62 | 255 | 0.7839 | 30.1954 | | 0.0005 | 20.0 | 260 | 0.7834 | 30.1954 | | 0.0006 | 20.38 | 265 | 0.7844 | 30.3730 | | 0.0102 | 20.77 | 270 | 0.7859 | 30.7282 | | 0.0006 | 21.15 | 275 | 0.7901 | 30.7282 | | 0.0006 | 21.54 | 280 | 0.7950 | 30.7282 | | 0.0006 | 21.92 | 285 | 0.7975 | 
31.0835 | | 0.0006 | 22.31 | 290 | 0.7984 | 30.7282 | | 0.0006 | 22.69 | 295 | 0.7954 | 30.3730 | | 0.0005 | 23.08 | 300 | 0.7935 | 31.0835 | | 0.0005 | 23.46 | 305 | 0.7928 | 31.0835 | | 0.0005 | 23.85 | 310 | 0.7933 | 31.2611 | | 0.0038 | 24.23 | 315 | 0.7950 | 30.9059 | | 0.0005 | 24.62 | 320 | 0.7976 | 31.6163 | | 0.0004 | 25.0 | 325 | 0.7995 | 31.7940 | | 0.0004 | 25.38 | 330 | 0.8006 | 31.4387 | | 0.0004 | 25.77 | 335 | 0.8005 | 31.6163 | | 0.0005 | 26.15 | 340 | 0.8011 | 31.4387 | | 0.0004 | 26.54 | 345 | 0.8020 | 31.6163 | | 0.0004 | 26.92 | 350 | 0.8024 | 31.4387 | | 0.0017 | 27.31 | 355 | 0.8029 | 31.4387 | | 0.0004 | 27.69 | 360 | 0.8035 | 31.4387 | | 0.0004 | 28.08 | 365 | 0.8045 | 31.4387 | | 0.0004 | 28.46 | 370 | 0.8049 | 31.4387 | | 0.0004 | 28.85 | 375 | 0.8056 | 31.4387 | | 0.0011 | 29.23 | 380 | 0.8060 | 31.4387 | | 0.0004 | 29.62 | 385 | 0.8065 | 31.4387 | | 0.0004 | 30.0 | 390 | 0.8065 | 31.4387 | | 0.0004 | 30.38 | 395 | 0.8068 | 31.4387 | | 0.0004 | 30.77 | 400 | 0.8069 | 31.6163 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
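## How to use

The card does not show inference code; below is a minimal sketch (an assumption, not part of the original card) with the `transformers` automatic-speech-recognition pipeline, which Whisper checkpoints support out of the box. The audio path is a placeholder, and decoding a local file requires `ffmpeg` or `soundfile` support.

```python
from transformers import pipeline

# Whisper checkpoints load through the ASR pipeline; input audio is
# resampled to the model's expected 16 kHz internally.
asr = pipeline(
    "automatic-speech-recognition",
    model="garnagar/whisper-ft-libri-en",
)

print(asr("sample.flac")["text"])  # "sample.flac" is a hypothetical path
```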
ad62bbf5c42fe49d45557e5682c09edd
spacy/fr_core_news_md
spacy
null
28
50
spacy
0
token-classification
false
false
false
lgpl-lr
['fr']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
11,827
false
### Details: https://spacy.io/models/fr#fr_core_news_md French pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `fr_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [UD French Sequoia v2.8](https://github.com/UniversalDependencies/UD_French-Sequoia) (Candito, Marie; Seddah, Djamé; Perrier, Guy; Guillaume, Bruno)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[spaCy lookups data](https://github.com/explosion/spacy-lookups-data) (Explosion)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `LGPL-LR` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (237 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|Poss=Yes`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|Number=Plur\|POS=ADP\|PronType=Art`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|PronType=Int`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, 
`Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=DET\|Poss=Yes`, `POS=AUX\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `POS=PRON\|Person=3\|Reflex=Yes`, `Gender=Masc\|POS=NOUN`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PRON\|Person=3`, `Number=Plur\|POS=NOUN`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=3`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes`, `Gender=Masc\|POS=PRON`, `POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=PRON`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=PRON`, `POS=NUM`, `Gender=Fem\|POS=NOUN`, `POS=SPACE`, `Gender=Fem\|Number=Plur\|POS=PRON`, `Number=Plur\|POS=PRON\|Person=3`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Number=Plur\|POS=PRON\|Person=2`, `NumType=Card\|POS=PRON`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NOUN`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET`, 
`Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|POS=PRON`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `POS=SYM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `POS=DET`, `Gender=Masc\|Number=Plur\|POS=PRON`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=1\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, 
`Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Masc\|NumType=Card\|POS=NUM` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux:pass`, `aux:tense`, `case`, `cc`, `ccomp`, `conj`, `cop`, `dep`, `det`, `expl:comp`, `expl:pass`, `expl:subj`, `fixed`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl:agent`, `obl:arg`, `obl:mod`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.80 | | `TOKEN_P` | 98.44 | | `TOKEN_R` | 98.96 | | `TOKEN_F` | 98.70 | | `POS_ACC` | 97.37 | | `MORPH_ACC` | 96.49 | | `MORPH_MICRO_P` | 98.68 | | `MORPH_MICRO_R` | 97.98 | | `MORPH_MICRO_F` | 98.33 | | `SENTS_P` | 88.19 | | `SENTS_R` | 89.46 | | `SENTS_F` | 88.51 | | `DEP_UAS` | 89.47 | | `DEP_LAS` | 85.63 | | `TAG_ACC` | 94.51 | | `LEMMA_ACC` | 91.35 | | `ENTS_P` | 83.17 | | `ENTS_R` | 83.23 | | `ENTS_F` | 83.20 |
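### Usage

A minimal usage sketch (not part of the original card), assuming the pipeline has been installed the usual way for spaCy models:

```python
# Install first, e.g.:  python -m spacy download fr_core_news_md
import spacy

nlp = spacy.load("fr_core_news_md")

doc = nlp("Emmanuel Macron a prononcé un discours à Paris.")

# Part-of-speech tags and lemmas come from the morphologizer/lemmatizer components
print([(tok.text, tok.pos_, tok.lemma_) for tok in doc])

# Named entities use the LOC/MISC/ORG/PER scheme listed above
print([(ent.text, ent.label_) for ent in doc.ents])
```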
761df36074a68e3147a1b3bfb467da6b
Ukhushn/distilbert-base-uncased-finetuned-homedepot
Ukhushn
distilbert
13
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,330
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-homedepot This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.9909 | 1.0 | 4688 | 2.5285 | | 2.5495 | 2.0 | 9376 | 2.3476 | | 2.4198 | 3.0 | 14064 | 2.2841 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
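## How to use

No usage example is given above; the sketch below (an assumption, not from the original card) loads the checkpoint with the `fill-mask` pipeline, which matches the masked-LM objective the card reports. The example sentence is invented.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="Ukhushn/distilbert-base-uncased-finetuned-homedepot",
)

# distilbert-base-uncased uses [MASK] as its mask token
for pred in fill("I need a new [MASK] for my kitchen faucet."):
    print(pred["token_str"], round(pred["score"], 4))
```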
953a66adbbc89401c454850b768f05bc
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682
jonatasgrosman
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
497
false
# exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
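A minimal transcription sketch with [HuggingSound](https://github.com/jonatasgrosman/huggingsound), the tool named above (the audio file paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-5_sixties-5_s682")

# transcribe() takes a list of audio file paths; 16 kHz input is expected
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])
print(transcriptions[0]["transcription"])
```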
835026ef7e302ea9fb03e50d9ff718c1
anas-awadalla/bart-base-finetuned-squad-infilling
anas-awadalla
bart
74
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
944
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-squad-infilling This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
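## How to use

The card leaves usage undocumented; below is a minimal sketch (an assumption, not from the original card) with the `text2text-generation` pipeline. BART is pre-trained with `<mask>` span infilling, so the input format shown (a sentence containing `<mask>`) is an educated guess at how this SQuAD-infilling fine-tune is meant to be queried.

```python
from transformers import pipeline

infill = pipeline(
    "text2text-generation",
    model="anas-awadalla/bart-base-finetuned-squad-infilling",
)

# The model should rewrite the input with the masked span filled in
print(infill("The Eiffel Tower is located in <mask>, France.")[0]["generated_text"])
```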
ccee399242f54a944a82637b5f3b7714