| column | dtype | values |
|---|---|---|
| repo_id | string | lengths 4 to 110 |
| author | string | lengths 2 to 27 |
| model_type | string | lengths 2 to 29 |
| files_per_repo | int64 | 2 to 15.4k |
| downloads_30d | int64 | 0 to 19.9M |
| library | string | lengths 2 to 37 |
| likes | int64 | 0 to 4.34k |
| pipeline | string | lengths 5 to 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2 to 30 |
| languages | string | lengths 4 to 1.63k |
| datasets | string | lengths 2 to 2.58k |
| co2 | string | 29 values |
| prs_count | int64 | 0 to 125 |
| prs_open | int64 | 0 to 120 |
| prs_merged | int64 | 0 to 15 |
| prs_closed | int64 | 0 to 28 |
| discussions_count | int64 | 0 to 218 |
| discussions_open | int64 | 0 to 148 |
| discussions_closed | int64 | 0 to 70 |
| tags | string | lengths 2 to 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 to 598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0 to 598k |
| hash | string | length 32 |
google/tapas-mini
google
tapas
8
4
transformers
0
feature-extraction
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['tapas', 'TapasModel']
false
true
true
4,607
false
# TAPAS mini model

This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_mini_reset` checkpoint of the [original GitHub repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is the one with absolute position embeddings:

- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_mini`

Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.

## Intended uses & limitations

You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence [SEP] Flattened table [SEP]
```

### Pre-training

The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.

The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01.

### BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
      title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
      author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
      year={2020},
      eprint={2004.02349},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
      title={Understanding tables with intermediate pre-training},
      author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
      year={2020},
      eprint={2010.00571},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
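A minimal usage sketch for extracting hidden states with this checkpoint (not part of the original card; the example table and question are illustrative, and loading the absolute-position variant via `revision="no_reset"` follows the description above):

```python
from transformers import TapasTokenizer, TapasModel
import pandas as pd

tokenizer = TapasTokenizer.from_pretrained("google/tapas-mini")
model = TapasModel.from_pretrained("google/tapas-mini")
# Absolute-position-embedding variant described above:
# model = TapasModel.from_pretrained("google/tapas-mini", revision="no_reset")

# TAPAS expects the table as a pandas DataFrame of strings.
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["59", "48"]})
queries = ["How old is Leonardo DiCaprio?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```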
bd4e8bca526ab71979d30b03e480df87
Helsinki-NLP/opus-mt-en-iir
Helsinki-NLP
marian
11
8
transformers
0
translation
true
true
false
apache-2.0
['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
4,150
false
### eng-iir * source group: English * target group: Indo-Iranian languages * OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md) * model: transformer * source language(s): eng * target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.7 | 0.326 | | newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 | | newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 | | newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 | | Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 | | Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 | | Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 | | Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 | | Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 | | Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 | | Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 | | Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 | | Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 | | Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 | | Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 | | Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 | | Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 | | Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 | | Tatoeba-test.eng.multi | 13.7 | 0.392 | | Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 | | Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 | | Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 | | Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 | | Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 | | Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 | | Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 | | Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 | | Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 | | Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 | | Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 | ### System Info: - hf_name: eng-iir - source_languages: eng - target_languages: iir - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir'] - src_constituents: {'eng'} - tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip - url_test_set: 
https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: iir - short_pair: en-iir - chrF2_score: 0.392 - bleu: 13.7 - brevity_penalty: 1.0 - ref_len: 63351.0 - src_name: English - tgt_name: Indo-Iranian languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: iir - prefer_old: False - long_pair: eng-iir - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
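A minimal translation sketch for this checkpoint (not part of the original card; the example sentence is illustrative, and the sentence-initial `>>hin<<` token follows the target-language-token requirement noted above):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-iir"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# A sentence-initial >>id<< token selects the target language (here Hindi).
batch = tokenizer([">>hin<< The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```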
d445c4cef1420bf2b922b2116901045b
KoichiYasuoka/roberta-base-coptic-ud-goeswith
KoichiYasuoka
roberta
10
5
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['cop']
['universal_dependencies']
null
0
0
0
0
0
0
0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
2,735
false
# roberta-base-coptic-ud-goeswith

## Model Description

This is a RoBERTa model pre-trained on Coptic Scriptorium Corpora for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-coptic](https://huggingface.co/KoichiYasuoka/roberta-base-coptic).

## How to Use

```py
class UDgoeswith(object):
  def __init__(self,bert):
    from transformers import AutoTokenizer,AutoModelForTokenClassification
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForTokenClassification.from_pretrained(bert)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=self.tokenizer(text,return_offsets_mapping=True)
    v=w["input_ids"]
    x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
    with torch.no_grad():
      e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
    r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
    e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
    g=self.model.config.label2id["X|_|goeswith"]
    r=numpy.tri(e.shape[0])
    for i in range(e.shape[0]):
      for j in range(i+2,e.shape[1]):
        r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
    e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
    m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
    m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
    p=numpy.zeros(m.shape)
    p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
    for i in range(1,m.shape[0]):
      m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
      m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="# text = "+text+"\n"
    v=[(s,e) for s,e in w["offset_mapping"] if s<e]
    for i,(s,e) in enumerate(v,1):
      q=self.model.config.id2label[p[i,h[i]]].split("|")
      u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-base-coptic-ud-goeswith")
print(nlp("ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).

Or without ufal.chu-liu-edmonds:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-coptic-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"))
```
b71d74a7c2465a49abdabba350e05911
edmz/distilbert-base-uncased-finetuned-ner
edmz
distilbert
13
6
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 - Precision: 0.9247 - Recall: 0.9385 - F1: 0.9315 - Accuracy: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2421 | 1.0 | 878 | 0.0701 | 0.9083 | 0.9217 | 0.9149 | 0.9801 | | 0.0555 | 2.0 | 1756 | 0.0599 | 0.9204 | 0.9357 | 0.9280 | 0.9830 | | 0.0311 | 3.0 | 2634 | 0.0612 | 0.9247 | 0.9385 | 0.9315 | 0.9837 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
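A minimal inference sketch (not part of the original card; the example sentence is illustrative, and the CoNLL-2003 entity labels are assumed from the dataset named above):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edmz/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```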
6f5a3d47d721af92a7360e112167786b
andreaparker/t5-small-finetuned-xsum
andreaparker
t5
9
4
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,255
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 244 | 2.6029 | 29.4956 | 13.5156 | 25.8306 | 25.842 | 18.2896 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
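A minimal summarization sketch (not part of the original card; the input text is illustrative, and running this text2text checkpoint through the `summarization` pipeline is an assumption):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaparker/t5-small-finetuned-xsum")
text = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris."
)
print(summarizer(text, max_length=30, min_length=5, do_sample=False))
```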
ae028cfe1464b0f8732d8ec474719daa
jonatasgrosman/exp_w2v2t_zh-cn_vp-es_s399
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['zh-CN']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'zh-CN']
false
true
true
475
false
# exp_w2v2t_zh-cn_vp-es_s399 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
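A minimal transcription sketch with the HuggingSound tool mentioned above (not part of the original card; the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_zh-cn_vp-es_s399")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16 kHz input expected
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```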
52d135a63edd299c59fd39fe8baa5790
dbmdz/electra-base-turkish-mc4-cased-generator
dbmdz
electra
7
2
transformers
0
fill-mask
true
true
false
mit
['tr']
['allenai/c4']
null
0
0
0
0
0
0
0
[]
false
true
true
2,532
false
# 🇹🇷 Turkish ELECTRA model

<p align="center">
  <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>

[![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454)

We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉

Some of the datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, which also chose the name of the BERT model: BERTurk. Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).

# Stats

We've also trained an ELECTRA (cased) model on the recently released Turkish part of the [multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB, resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one).

# mC4 ELECTRA

In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.

# Model usage

All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
```

# Citation

You can use the following BibTeX entry for citation:

```bibtex
@software{stefan_schweter_2020_3770924,
  author    = {Stefan Schweter},
  title     = {BERTurk - BERT models for Turkish},
  month     = apr,
  year      = 2020,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.3770924},
  url       = {https://doi.org/10.5281/zenodo.3770924}
}
```

# Acknowledgments

Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation.

We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo!

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
265f80c4dd2781f75977882d49737c55
xrverse/distilbert-base-uncased-distilled-clinc
xrverse
distilbert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,786
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.0332 - Accuracy: 0.9303 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4409 | 1.0 | 318 | 0.2288 | 0.6206 | | 0.1898 | 2.0 | 636 | 0.1106 | 0.8461 | | 0.116 | 3.0 | 954 | 0.0729 | 0.8994 | | 0.0861 | 4.0 | 1272 | 0.0548 | 0.9097 | | 0.0707 | 5.0 | 1590 | 0.0454 | 0.9184 | | 0.0613 | 6.0 | 1908 | 0.0399 | 0.9239 | | 0.0557 | 7.0 | 2226 | 0.0371 | 0.9294 | | 0.0522 | 8.0 | 2544 | 0.0348 | 0.93 | | 0.05 | 9.0 | 2862 | 0.0336 | 0.9297 | | 0.0487 | 10.0 | 3180 | 0.0332 | 0.9303 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 2.4.0 - Tokenizers 0.10.3
9f20eb8cdf384db44961e3d464086334
ajitjadhav/t5-small-finetuned-summarization-app
ajitjadhav
t5
13
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['cnn_dailymail']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,545
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-summarization-app This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.6614 - Rouge1: 24.5589 - Rouge2: 11.8509 - Rougel: 20.3011 - Rougelsum: 23.1768 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.8267 | 1.0 | 23927 | 1.6689 | 24.4634 | 11.7413 | 20.2154 | 23.0875 | 18.9993 | | 1.81 | 2.0 | 47854 | 1.6614 | 24.5589 | 11.8509 | 20.3011 | 23.1768 | 19.0 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
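A minimal sketch of running the model directly with `generate` (not part of the original card; the `summarize:` prefix, the input text, and the generation settings are assumptions):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "ajitjadhav/t5-small-finetuned-summarization-app"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "summarize: " + "The city council met on Tuesday to discuss the new transit plan."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```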
44c0871f6f0570980ddaf7edb166dca0
Imene/vit-base-patch16-224-in21k-wi2
Imene
vit
7
2
transformers
0
image-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,443
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Imene/vit-base-patch16-224-in21k-wi2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9892 - Train Accuracy: 0.5568 - Train Top-3-accuracy: 0.8130 - Validation Loss: 3.0923 - Validation Accuracy: 0.4280 - Validation Top-3-accuracy: 0.7034 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 3.8488 | 0.0720 | 0.1713 | 3.7116 | 0.1564 | 0.3617 | 0 | | 3.5246 | 0.2703 | 0.4898 | 3.4122 | 0.3217 | 0.5732 | 1 | | 3.2493 | 0.4150 | 0.6827 | 3.2232 | 0.3880 | 0.6633 | 2 | | 3.0840 | 0.5002 | 0.7670 | 3.1275 | 0.4255 | 0.6921 | 3 | | 2.9892 | 0.5568 | 0.8130 | 3.0923 | 0.4280 | 0.7034 | 4 | ### Framework versions - Transformers 4.21.3 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
2484e9a21bb2044e0823013cb23068c4
kpriyanshu256/whisper-large-v2-as-600-32-1e-05-pretrain-bn
kpriyanshu256
whisper
15
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['as']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,596
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn-Assamese This model is a fine-tuned version of [kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn](https://huggingface.co/kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2637 - Wer: 21.6928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1915 | 1.1 | 50 | 0.2129 | 26.3851 | | 0.0639 | 3.06 | 100 | 0.2305 | 23.0825 | | 0.0192 | 5.03 | 150 | 0.2391 | 22.0538 | | 0.0041 | 6.13 | 200 | 0.2637 | 21.6928 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
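A minimal transcription sketch (not part of the original card; the audio file name is a placeholder and the chunking setting is an assumption):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kpriyanshu256/whisper-large-v2-as-600-32-1e-05-pretrain-bn",
    chunk_length_s=30,  # long-form audio is split into 30 s chunks
)
# The pipeline decodes and resamples local audio files to the expected 16 kHz.
print(asr("assamese_sample.wav"))
```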
075e489da2b351c3bc897f8e925e5d86
hsge/TESS_768_v1
hsge
albert
8
210
transformers
0
null
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
935
false
<h1>Transformer Encoder for Social Science (TESS)</h1>

TESS is a deep neural network model intended for social science related NLP tasks. The model was developed by Haosen Ge, In Young Park, Xuancheng Qian, and Grace Zeng.

We demonstrate in two validation tests that TESS outperforms BERT and RoBERTa by 16.7% on average, especially when the number of training samples is limited (<1,000 training instances). These results show TESS's advantage on social science text-processing tasks.

GitHub: [TESS](https://github.com/haosenge/TESS).

<h2>Training Corpus</h2>

| TEXT | SOURCE |
| ------------- | ------------- |
| Preferential Trade Agreements | ToTA |
| Congressional Bills | Kornilova and Eidelman (2019) |
| UNGA Resolutions | UN |
| Firms' Annual Reports | Loughran and McDonald (2016) |
| U.S. Court Opinions | Caselaw Access Project |

The model is trained on 4 NVIDIA A100 GPUs for 120K steps.
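A minimal feature-extraction sketch (not part of the original card; it assumes the repository ships a compatible tokenizer, and mean pooling is just one illustrative way to obtain a sentence embedding):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hsge/TESS_768_v1")
model = AutoModel.from_pretrained("hsge/TESS_768_v1")

inputs = tokenizer("The committee approved the trade agreement.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
# Mean-pool token embeddings into a single sentence vector for a downstream classifier.
embedding = hidden.mean(dim=1)
print(embedding.shape)
```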
da58b1e3168cb7f1e11ab1961216e5e1
Kumicho/distilbert-base-uncased-finetuned-cola
Kumicho
distilbert
15
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,276
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7758 - Matthews Correlation: 0.5259 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
70bf2a2871e194a10620ccf3d3ff5135
Go2Heart/BERT_Mod_7_Squad
Go2Heart
bert
10
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,247
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_Mod_7_Squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.0928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.189 | 1.0 | 4089 | 1.2196 | | 1.0312 | 2.0 | 8178 | 1.0691 | | 0.8954 | 3.0 | 12267 | 1.0928 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 1.17.0 - Tokenizers 0.12.1
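A minimal question-answering sketch (not part of the original card; the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Go2Heart/BERT_Mod_7_Squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```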
706d90f981059e57b6eda9a8e5eabf16
jonatasgrosman/exp_w2v2t_es_unispeech-ml_s952
jonatasgrosman
unispeech
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'es']
false
true
true
500
false
# exp_w2v2t_es_unispeech-ml_s952 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
59e537cee6e9d87ad72f88aa56c25daf
fathyshalab/all-roberta-large-v1-work-4-16-5-oos
fathyshalab
roberta
11
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,513
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-work-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3586 - Accuracy: 0.3689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 | | 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 | | 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 | | 1.5539 | 4.0 | 4 | 2.3874 | 0.36 | | 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
9bb58e01dc001e4e7bac81d6dcdd8fd5
Theivaprakasham/wav2vec2-base-timit-demo-colab
Theivaprakasham
wav2vec2
12
9
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,641
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4475 - Wer: 0.3400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6929 | 4.0 | 500 | 2.4485 | 1.0009 | | 0.9441 | 8.0 | 1000 | 0.4848 | 0.4758 | | 0.3016 | 12.0 | 1500 | 0.4464 | 0.4016 | | 0.1715 | 16.0 | 2000 | 0.4666 | 0.3765 | | 0.1277 | 20.0 | 2500 | 0.4340 | 0.3515 | | 0.1082 | 24.0 | 3000 | 0.4544 | 0.3495 | | 0.0819 | 28.0 | 3500 | 0.4475 | 0.3400 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
db4c9ce7101cea4cd8f9d8fe11107bbf
nateraw/vit-base-beans-demo-v3
nateraw
vit
14
13
transformers
0
image-classification
true
false
false
apache-2.0
null
['beans']
null
1
1
0
0
0
0
0
['image-classification', 'other-image-classification', 'generated_from_trainer']
true
true
true
1,276
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans-demo-v3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0645 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0397 | 1.54 | 100 | 0.0645 | 0.9850 | ### Framework versions - Transformers 4.10.0.dev0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
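A minimal image-classification sketch (not part of the original card; the image path is a placeholder):

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/vit-base-beans-demo-v3")
image = Image.open("bean_leaf.jpg")  # a leaf photo similar to the beans dataset images
print(classifier(image))
```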
1fa52ca9527685386c01403ea314ceb5
paola-md/recipe-lr8e06-wd0.01-bs32
paola-md
roberta
6
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,701
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-lr8e06-wd0.01-bs32 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2753 - Rmse: 0.5246 - Mse: 0.2753 - Mae: 0.4184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2769 | 1.0 | 623 | 0.2774 | 0.5266 | 0.2774 | 0.4296 | | 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4145 | | 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 | | 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 | | 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 | | 0.2705 | 6.0 | 3738 | 0.2753 | 0.5246 | 0.2753 | 0.4184 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
545aa4d2645d6b1d1adc946cd5178a39
gchhablani/fnet-large-finetuned-cola-copy4
gchhablani
fnet
71
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,409
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-cola-copy4 This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6500 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 | | 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 | | 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
822664923da3664f52979a3c1e05900f
nandysoham16/Canadian_Armed_Forces-clustered
nandysoham16
distilbert
8
10
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,874
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nandysoham16/Canadian_Armed_Forces-clustered This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5493 - Train End Logits Accuracy: 0.8611 - Train Start Logits Accuracy: 0.7812 - Validation Loss: 0.3839 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 0.8000 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.5493 | 0.8611 | 0.7812 | 0.3839 | 1.0 | 0.8000 | 0 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
41b8da9d31c5ee09f78d33daf8fcb654
jonatasgrosman/exp_w2v2t_th_vp-100k_s403
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['th']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'th']
false
true
true
478
false
# exp_w2v2t_th_vp-100k_s403 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
bd5a3994f721d2a4375bad3d8b4e07f4
svo2/roberta-finetuned-country-neg
svo2
roberta
13
14
transformers
0
question-answering
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
986
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-country-neg This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
08131fbd2a07f50ba4ac8a04128924e3
mrm8488/ddpm-ema-anime-256
mrm8488
null
9
8
diffusers
1
null
false
false
false
apache-2.0
['en']
['huggan/selfie2anime']
null
0
0
0
0
0
0
0
[]
false
true
true
1,338
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-ema-anime-256

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/selfie2anime` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/mrm8488/ddpm-ema-anime-256/tensorboard?#scalars)

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Q Blocks](https://www.qblocks.cloud/)
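For the TODO snippet above, a minimal sampling sketch with 🤗 Diffusers (not part of the original card; it assumes the repository loads as an unconditional `DDPMPipeline`):

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("mrm8488/ddpm-ema-anime-256")
image = pipe(num_inference_steps=1000).images[0]  # DDPM's default number of denoising steps
image.save("anime_sample.png")
```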
484015960c20caaeb5d6c22329b1df78
edugp/wav2vec2-xls-r-300m-cv8-es
edugp
wav2vec2
11
12
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,286
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-cv8-es This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2115 - eval_wer: 0.1931 - eval_runtime: 859.964 - eval_samples_per_second: 17.954 - eval_steps_per_second: 2.244 - epoch: 6.97 - step: 50000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
558aeea427b868ecbbb1cd306db18f1c
Ankit15nov/xlm-roberta-base-finetuned-panx-de
Ankit15nov
xlm-roberta
12
7
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,313
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1368 - F1: 0.8599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2618 | 1.0 | 525 | 0.1748 | 0.8134 | | 0.1274 | 2.0 | 1050 | 0.1398 | 0.8461 | | 0.0817 | 3.0 | 1575 | 0.1368 | 0.8599 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.5.1 - Datasets 1.16.1 - Tokenizers 0.10.3
94dbfd8aeefb96db6c9ca2a44dbcca8a
caffsean/gpt2-dzongkha-text
caffsean
gpt2
17
3
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,219
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-dzongkha-text This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 249 | 3.1538 | | No log | 2.0 | 498 | 2.6796 | | 4.0415 | 3.0 | 747 | 2.5939 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
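A minimal generation sketch (not part of the original card; the Dzongkha-script prompt is a placeholder and the sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="caffsean/gpt2-dzongkha-text")
print(generator("བོད་", max_new_tokens=50, do_sample=True, top_p=0.95))
```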
1fd7b84344b6557e53378da941470099
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-8_sixties-2_s284
jonatasgrosman
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'es']
false
true
true
497
false
# exp_w2v2r_es_vp-100k_age_teens-8_sixties-2_s284 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
5264bedb86505039612e0d9d89301899
AlekseyKorshuk/dalio-all-io-125m-3-epoch
AlekseyKorshuk
opt
13
5
transformers
0
text-generation
true
false
false
other
null
['AlekseyKorshuk/dalio-all-io']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
6,666
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dalio-all-io-125m-3-epoch This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the AlekseyKorshuk/dalio-all-io dataset. It achieves the following results on the evaluation set: - Loss: 2.7656 - Accuracy: 0.0497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.1406 | 0.03 | 1 | 3.0762 | 0.0451 | | 3.074 | 0.07 | 2 | 3.0762 | 0.0451 | | 3.0557 | 0.1 | 3 | 3.0762 | 0.0451 | | 3.2166 | 0.14 | 4 | 3.0176 | 0.0457 | | 3.0989 | 0.17 | 5 | 2.9922 | 0.0460 | | 3.0732 | 0.21 | 6 | 2.9746 | 0.0464 | | 3.0867 | 0.24 | 7 | 2.9629 | 0.0463 | | 2.979 | 0.28 | 8 | 2.9512 | 0.0467 | | 3.1838 | 0.31 | 9 | 2.9414 | 0.0467 | | 2.9399 | 0.34 | 10 | 2.9336 | 0.0467 | | 2.926 | 0.38 | 11 | 2.9258 | 0.0471 | | 3.2144 | 0.41 | 12 | 2.9199 | 0.0473 | | 2.978 | 0.45 | 13 | 2.9141 | 0.0474 | | 3.0076 | 0.48 | 14 | 2.9082 | 0.0476 | | 2.9897 | 0.52 | 15 | 2.9023 | 0.0477 | | 2.8831 | 0.55 | 16 | 2.8945 | 0.0479 | | 2.9749 | 0.59 | 17 | 2.8867 | 0.0479 | | 2.9431 | 0.62 | 18 | 2.8828 | 0.0478 | | 3.0498 | 0.66 | 19 | 2.8770 | 0.0479 | | 2.9409 | 0.69 | 20 | 2.8711 | 0.0479 | | 2.96 | 0.72 | 21 | 2.8672 | 0.0480 | | 3.0767 | 0.76 | 22 | 2.8633 | 0.0478 | | 2.772 | 0.79 | 23 | 2.8594 | 0.0479 | | 3.0574 | 0.83 | 24 | 2.8535 | 0.0480 | | 2.8137 | 0.86 | 25 | 2.8496 | 0.0480 | | 2.8872 | 0.9 | 26 | 2.8438 | 0.0483 | | 3.0085 | 0.93 | 27 | 2.8398 | 0.0484 | | 2.9165 | 0.97 | 28 | 2.8359 | 0.0485 | | 2.8525 | 1.0 | 29 | 2.8340 | 0.0486 | | 2.7759 | 1.03 | 30 | 2.8301 | 0.0485 | | 2.7312 | 1.07 | 31 | 2.8281 | 0.0485 | | 2.6641 | 1.1 | 32 | 2.8262 | 0.0487 | | 2.7896 | 1.14 | 33 | 2.8242 | 0.0486 | | 2.7878 | 1.17 | 34 | 2.8223 | 0.0487 | | 2.4028 | 1.21 | 35 | 2.8203 | 0.0487 | | 2.5618 | 1.24 | 36 | 2.8184 | 0.0488 | | 2.6697 | 1.28 | 37 | 2.8164 | 0.0488 | | 2.6333 | 1.31 | 38 | 2.8145 | 0.0487 | | 2.4897 | 1.34 | 39 | 2.8125 | 0.0486 | | 2.4908 | 1.38 | 40 | 2.8105 | 0.0487 | | 2.6926 | 1.41 | 41 | 2.8086 | 0.0488 | | 2.6602 | 1.45 | 42 | 2.8066 | 0.0489 | | 2.8054 | 1.48 | 43 | 2.8047 | 0.0489 | | 2.5532 | 1.52 | 44 | 2.8047 | 0.0490 | | 2.4756 | 1.55 | 45 | 2.8027 | 0.0491 | | 2.6123 | 1.59 | 46 | 2.8008 | 0.0491 | | 2.5117 | 1.62 | 47 | 2.7988 | 0.0490 | | 2.5552 | 1.66 | 48 | 2.7969 | 0.0490 | | 2.5122 | 1.69 | 49 | 2.7949 | 0.0490 | | 2.5593 | 1.72 | 50 | 2.7930 | 0.0491 | | 2.5759 | 1.76 | 51 | 2.7910 | 0.0491 | | 2.5535 | 1.79 | 52 | 2.7891 | 0.0493 | | 2.6531 | 1.83 | 53 | 2.7871 | 0.0494 | | 2.5701 | 1.86 | 54 | 2.7852 | 0.0495 | | 2.6621 | 1.9 | 55 | 2.7832 | 0.0497 | | 2.532 | 1.93 | 56 | 2.7812 | 0.0496 | | 2.5928 | 1.97 | 57 | 2.7793 | 0.0497 | | 2.5486 | 2.0 | 58 | 2.7754 | 0.0497 | | 2.5009 | 2.03 | 59 | 2.7734 | 0.0497 | | 2.4346 | 
2.07 | 60 | 2.7734 | 0.0498 | | 2.3259 | 2.1 | 61 | 2.7715 | 0.0497 | | 2.3569 | 2.14 | 62 | 2.7695 | 0.0498 | | 2.5898 | 2.17 | 63 | 2.7695 | 0.0498 | | 2.3657 | 2.21 | 64 | 2.7676 | 0.0498 | | 2.4875 | 2.24 | 65 | 2.7676 | 0.0498 | | 2.4392 | 2.28 | 66 | 2.7676 | 0.0497 | | 2.3595 | 2.31 | 67 | 2.7656 | 0.0497 | | 2.4757 | 2.34 | 68 | 2.7656 | 0.0498 | | 2.4617 | 2.38 | 69 | 2.7656 | 0.0498 | | 2.3376 | 2.41 | 70 | 2.7656 | 0.0499 | | 2.3129 | 2.45 | 71 | 2.7656 | 0.0498 | | 2.5703 | 2.48 | 72 | 2.7656 | 0.0498 | | 2.3491 | 2.52 | 73 | 2.7656 | 0.0498 | | 2.3484 | 2.55 | 74 | 2.7656 | 0.0498 | | 2.3782 | 2.59 | 75 | 2.7656 | 0.0497 | | 2.4033 | 2.62 | 76 | 2.7656 | 0.0498 | | 2.3821 | 2.66 | 77 | 2.7656 | 0.0498 | | 2.39 | 2.69 | 78 | 2.7656 | 0.0498 | | 2.3984 | 2.72 | 79 | 2.7656 | 0.0497 | | 2.3936 | 2.76 | 80 | 2.7656 | 0.0498 | | 2.4414 | 2.79 | 81 | 2.7656 | 0.0497 | | 2.4727 | 2.83 | 82 | 2.7656 | 0.0497 | | 2.3192 | 2.86 | 83 | 2.7656 | 0.0497 | | 2.4365 | 2.9 | 84 | 2.7656 | 0.0497 | | 2.5042 | 2.93 | 85 | 2.7656 | 0.0497 | | 2.4746 | 2.97 | 86 | 2.7656 | 0.0497 | | 2.5383 | 3.0 | 87 | 2.7656 | 0.0497 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
5da301c5fd8b621838900407730b2b73
ViktorDo/DistilBERT-POWO_MGH_Growth_Form_Finetuned
ViktorDo
distilbert
12
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilBERT-POWO_MGH_Growth_Form_Finetuned This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2182 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2379 | 1.0 | 2054 | 0.2241 | | 0.2098 | 2.0 | 4108 | 0.2173 | | 0.2168 | 3.0 | 6162 | 0.2182 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
074d3fc9008afa8dce4bb7fa9002bb0e
gokuls/distilbert_add_GLUE_Experiment_logit_kd_rte_96
gokuls
distilbert
17
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,121
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_logit_kd_rte_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.4234 - Accuracy: 0.4729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4604 | 1.0 | 10 | 0.4429 | 0.4729 | | 0.4358 | 2.0 | 20 | 0.4328 | 0.4729 | | 0.4282 | 3.0 | 30 | 0.4290 | 0.4729 | | 0.4246 | 4.0 | 40 | 0.4269 | 0.4729 | | 0.4227 | 5.0 | 50 | 0.4252 | 0.4729 | | 0.4204 | 6.0 | 60 | 0.4243 | 0.4729 | | 0.4191 | 7.0 | 70 | 0.4238 | 0.4729 | | 0.4185 | 8.0 | 80 | 0.4235 | 0.4729 | | 0.4175 | 9.0 | 90 | 0.4234 | 0.4729 | | 0.4164 | 10.0 | 100 | 0.4235 | 0.4729 | | 0.418 | 11.0 | 110 | 0.4236 | 0.4729 | | 0.4169 | 12.0 | 120 | 0.4236 | 0.4729 | | 0.4173 | 13.0 | 130 | 0.4238 | 0.4729 | | 0.4168 | 14.0 | 140 | 0.4239 | 0.4729 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
4b7d904c42b406940a2a0e2d4f60bc83
kasrahabib/20_propogated
kasrahabib
bert
10
0
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,915
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kasrahabib/20_propogated This model is a fine-tuned version of [kasrahabib/XXX08_02_23__-bucket-finetunned](https://huggingface.co/kasrahabib/XXX08_02_23__-bucket-finetunned) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0504 - Validation Loss: 0.1528 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.2492 | 0.1740 | 0 | | 0.1527 | 0.1501 | 1 | | 0.1092 | 0.1582 | 2 | | 0.0879 | 0.1568 | 3 | | 0.0774 | 0.1577 | 4 | | 0.0689 | 0.1513 | 5 | | 0.0597 | 0.1598 | 6 | | 0.0600 | 0.1536 | 7 | | 0.0526 | 0.1519 | 8 | | 0.0504 | 0.1528 | 9 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
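A hedged TensorFlow inference sketch (not from the original card): it assumes the checkpoint exposes a sequence-classification head loadable with `TFAutoModelForSequenceClassification`; the example sentence and the interpretation of the predicted class are assumptions.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "kasrahabib/20_propogated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input; the label mapping lives in the model config (model.config.id2label).
inputs = tokenizer("The system shall respond to user queries within two seconds.", return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```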
e73adce7c14477b566a3b199a71e9eaf
alk/t5-small-finetuned-cnn_dailymail-en-es
alk
t5
8
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,465
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # alk/t5-small-finetuned-cnn_dailymail-en-es This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9163 - Validation Loss: 1.7610 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 71776, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9945 | 1.7837 | 0 | | 1.9478 | 1.7694 | 1 | | 1.9278 | 1.7646 | 2 | | 1.9163 | 1.7610 | 3 | ### Framework versions - Transformers 4.19.0 - TensorFlow 2.8.0 - Datasets 2.2.1 - Tokenizers 0.12.1
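A hedged TensorFlow generation sketch (not from the original card): the fine-tuning data is undocumented, so the `summarize:` task prefix and the input passage are assumptions carried over from how T5 checkpoints are commonly used.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "alk/t5-small-finetuned-cnn_dailymail-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task prefix is an assumption; adjust it to match how the model was actually fine-tuned.
text = "summarize: The city council met on Monday to discuss the new transit plan, which would add three bus lines."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
outputs = model.generate(**inputs, max_length=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```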
b0c6ca3b8b5b236aebce9f7971e33c9c
muhtasham/tiny-mlm-glue-mnli-target-glue-qnli
muhtasham
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,806
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mnli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mnli](https://huggingface.co/muhtasham/tiny-mlm-glue-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4695 - Accuracy: 0.7814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6034 | 0.15 | 500 | 0.5431 | 0.7335 | | 0.5403 | 0.31 | 1000 | 0.5253 | 0.7459 | | 0.5174 | 0.46 | 1500 | 0.4953 | 0.7659 | | 0.5137 | 0.61 | 2000 | 0.5259 | 0.7483 | | 0.511 | 0.76 | 2500 | 0.4814 | 0.7750 | | 0.5032 | 0.92 | 3000 | 0.4670 | 0.7847 | | 0.4901 | 1.07 | 3500 | 0.4525 | 0.7904 | | 0.4798 | 1.22 | 4000 | 0.4679 | 0.7836 | | 0.4667 | 1.37 | 4500 | 0.4752 | 0.7798 | | 0.4736 | 1.53 | 5000 | 0.4695 | 0.7814 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
ecd412a5e409636c916fbbe9ec5cd199
ZabonZooY/BasilticAbyssDream
ZabonZooY
null
56
0
null
2
null
false
false
false
unlicense
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
666
false
# BasilticAbyssDream > The BAD model comes in 3 variants; download whichever you prefer. Reference pictures are included in the sample files. # Recommendations * The most recommended model is BAD 0.3. * BAD 0.1 is the closest to semi-realistic, while BAD 0.5 is very realistic but shows a lot of the smearing typical of Dream-style models. * Recommended prompts: detailed face, restore face * Recommended negatives: (worst quality, low quality:1.4), (loli, child, infant, baby:1.3), accessories
2f62d35bcd16732232844275fa734db1
RawMean/model_dir
RawMean
deberta-v2
11
3
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,824
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_dir This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0380 - Pearson: 0.9399 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 12 | 0.2773 | 0.7230 | | No log | 2.0 | 24 | 0.1120 | 0.7812 | | No log | 3.0 | 36 | 0.1090 | 0.8638 | | No log | 4.0 | 48 | 0.0613 | 0.9163 | | No log | 5.0 | 60 | 0.0447 | 0.9409 | | No log | 6.0 | 72 | 0.0356 | 0.9402 | | No log | 7.0 | 84 | 0.0368 | 0.9359 | | No log | 8.0 | 96 | 0.0408 | 0.9295 | | No log | 9.0 | 108 | 0.0397 | 0.9382 | | No log | 10.0 | 120 | 0.0380 | 0.9399 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
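A hedged inference sketch (not from the original card): since Pearson correlation is the reported metric, the checkpoint is assumed to expose a single-output regression head; the input text and the meaning of the predicted score are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "RawMean/model_dir"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input; the model presumably emits one continuous score per example.
inputs = tokenizer("Example passage whose target value we want to predict.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```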
55698d38152f764b0d345f2c541c9056
semindan/xnli_xlm_r_base_broken
semindan
xlm-roberta
10
1
transformers
0
text-classification
true
false
false
mit
null
['xnli']
null
0
0
0
0
0
0
0
['text-classification', 'generated_from_trainer']
true
true
true
5,754
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xnli_xlm_r_base_only_en_automodel_single_gpu This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xnli dataset. It achieves the following results on the evaluation set: - Loss: 1.0986 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.1064 | 0.04 | 1000 | 1.1003 | 0.3333 | | 1.1042 | 0.08 | 2000 | 1.1006 | 0.3333 | | 1.1049 | 0.12 | 3000 | 1.0992 | 0.3333 | | 1.1037 | 0.16 | 4000 | 1.1019 | 0.3333 | | 1.1037 | 0.2 | 5000 | 1.0986 | 0.3333 | | 1.1028 | 0.24 | 6000 | 1.1014 | 0.3333 | | 1.1044 | 0.29 | 7000 | 1.1059 | 0.3333 | | 1.102 | 0.33 | 8000 | 1.1000 | 0.3333 | | 1.1022 | 0.37 | 9000 | 1.1012 | 0.3333 | | 1.1019 | 0.41 | 10000 | 1.0995 | 0.3333 | | 1.1018 | 0.45 | 11000 | 1.0990 | 0.3333 | | 1.103 | 0.49 | 12000 | 1.1018 | 0.3333 | | 1.1016 | 0.53 | 13000 | 1.0989 | 0.3333 | | 1.1021 | 0.57 | 14000 | 1.0995 | 0.3333 | | 1.1012 | 0.61 | 15000 | 1.1026 | 0.3333 | | 1.1012 | 0.65 | 16000 | 1.1000 | 0.3333 | | 1.1018 | 0.69 | 17000 | 1.0992 | 0.3333 | | 1.1004 | 0.73 | 18000 | 1.0996 | 0.3333 | | 1.101 | 0.77 | 19000 | 1.0987 | 0.3333 | | 1.1011 | 0.81 | 20000 | 1.1001 | 0.3333 | | 1.1006 | 0.86 | 21000 | 1.0991 | 0.3333 | | 1.1006 | 0.9 | 22000 | 1.1028 | 0.3333 | | 1.1003 | 0.94 | 23000 | 1.0988 | 0.3333 | | 1.1006 | 0.98 | 24000 | 1.0987 | 0.3333 | | 1.1008 | 1.02 | 25000 | 1.0995 | 0.3333 | | 1.1011 | 1.06 | 26000 | 1.0987 | 0.3333 | | 1.1003 | 1.1 | 27000 | 1.0987 | 0.3333 | | 1.1002 | 1.14 | 28000 | 1.1020 | 0.3333 | | 1.1 | 1.18 | 29000 | 1.0988 | 0.3333 | | 1.1002 | 1.22 | 30000 | 1.0995 | 0.3333 | | 1.1001 | 1.26 | 31000 | 1.0989 | 0.3333 | | 1.1001 | 1.3 | 32000 | 1.0986 | 0.3333 | | 1.0999 | 1.34 | 33000 | 1.0989 | 0.3333 | | 1.1004 | 1.39 | 34000 | 1.0987 | 0.3333 | | 1.0993 | 1.43 | 35000 | 1.0989 | 0.3333 | | 1.1003 | 1.47 | 36000 | 1.0989 | 0.3333 | | 1.0999 | 1.51 | 37000 | 1.0991 | 0.3333 | | 1.0999 | 1.55 | 38000 | 1.0993 | 0.3333 | | 1.0994 | 1.59 | 39000 | 1.0993 | 0.3333 | | 1.0994 | 1.63 | 40000 | 1.0989 | 0.3333 | | 1.0999 | 1.67 | 41000 | 1.0988 | 0.3333 | | 1.0995 | 1.71 | 42000 | 1.0996 | 0.3333 | | 1.1003 | 1.75 | 43000 | 1.0987 | 0.3333 | | 1.0996 | 1.79 | 44000 | 1.0987 | 0.3333 | | 1.0996 | 1.83 | 45000 | 1.0990 | 0.3333 | | 1.0994 | 1.87 | 46000 | 1.0990 | 0.3333 | | 1.0992 | 1.91 | 47000 | 1.1000 | 0.3333 | | 1.0992 | 1.96 | 48000 | 1.0989 | 0.3333 | | 1.0991 | 2.0 | 49000 | 1.0991 | 0.3333 | | 1.099 | 2.04 | 50000 | 1.0987 | 0.3333 | | 1.0992 | 2.08 | 51000 | 1.0987 | 0.3333 | | 1.0995 | 2.12 | 52000 | 1.0988 | 0.3333 | | 1.0994 | 2.16 | 53000 | 1.0989 | 0.3333 | | 1.0994 | 2.2 | 54000 | 1.0989 | 0.3333 | | 1.0993 | 2.24 | 55000 | 1.0988 | 0.3333 | | 1.0988 | 2.28 | 56000 | 1.0986 | 0.3333 | | 1.0995 | 2.32 | 57000 | 1.0986 | 0.3333 | | 1.0991 | 2.36 | 58000 | 
1.0988 | 0.3333 | | 1.0989 | 2.4 | 59000 | 1.0987 | 0.3333 | | 1.0991 | 2.44 | 60000 | 1.0990 | 0.3333 | | 1.0992 | 2.49 | 61000 | 1.0989 | 0.3333 | | 1.0992 | 2.53 | 62000 | 1.0987 | 0.3333 | | 1.0989 | 2.57 | 63000 | 1.0986 | 0.3333 | | 1.099 | 2.61 | 64000 | 1.0987 | 0.3333 | | 1.0991 | 2.65 | 65000 | 1.0986 | 0.3333 | | 1.0991 | 2.69 | 66000 | 1.0986 | 0.3333 | | 1.0991 | 2.73 | 67000 | 1.0987 | 0.3333 | | 1.0986 | 2.77 | 68000 | 1.0987 | 0.3333 | | 1.0992 | 2.81 | 69000 | 1.0986 | 0.3333 | | 1.0989 | 2.85 | 70000 | 1.0986 | 0.3333 | | 1.099 | 2.89 | 71000 | 1.0987 | 0.3333 | | 1.0989 | 2.93 | 72000 | 1.0986 | 0.3333 | | 1.0989 | 2.97 | 73000 | 1.0986 | 0.3333 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
92d75fdfaf617936b39ed96bb62a470c
WillHeld/t5-base-pointer-mtop
WillHeld
mt5
17
3
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
['mtop']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,184
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-pointer-mtop This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mtop dataset. It achieves the following results on the evaluation set: - Loss: 0.1131 - Exact Match: 0.7199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:-----:|:----:|:---------------:|:-----------:| | 1.7749 | 6.65 | 200 | 0.5892 | 0.0031 | | 0.6021 | 13.33 | 400 | 0.5160 | 0.0139 | | 0.6044 | 19.98 | 600 | 0.4080 | 0.0532 | | 0.3302 | 26.65 | 800 | 0.1865 | 0.3620 | | 0.1483 | 33.33 | 1000 | 0.1267 | 0.5105 | | 0.0768 | 39.98 | 1200 | 0.1131 | 0.5298 | | 0.0525 | 46.65 | 1400 | 0.1219 | 0.5414 | | 0.0801 | 53.33 | 1600 | 0.1186 | 0.5275 | | 0.0331 | 59.98 | 1800 | 0.1306 | 0.5423 | | 0.0254 | 66.65 | 2000 | 0.1396 | 0.5396 | | 0.0168 | 73.33 | 2200 | 0.1560 | 0.5436 | | 0.0129 | 79.98 | 2400 | 0.1659 | 0.5494 | | 0.0105 | 86.65 | 2600 | 0.1699 | 0.5423 | | 0.0088 | 93.33 | 2800 | 0.1742 | 0.5472 | | 0.0077 | 99.98 | 3000 | 0.1775 | 0.5468 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
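A hedged generation sketch (not from the original card): it assumes raw utterances are passed without a task prefix and that the model emits an MTOP-style semantic parse; the exact input format used during fine-tuning is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "WillHeld/t5-base-pointer-mtop"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative utterance; the output should be a logical form in the MTOP annotation scheme.
inputs = tokenizer("set an alarm for 7 am tomorrow", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```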
a74cd0dd0d4f6c398ffc8f07901ac0cf
surajjoshi/swin-tiny-patch4-window7-224-finetuned-brainTumorData
surajjoshi
swin
45
7
transformers
1
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,063
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-brainTumorData This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
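A hedged usage sketch (not from the original card): it assumes the checkpoint loads with the standard `transformers` image-classification pipeline; the image path is a placeholder and the label set depends on the undocumented fine-tuning dataset.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="surajjoshi/swin-tiny-patch4-window7-224-finetuned-brainTumorData",
)

# Placeholder path -- any local image file or URL works here.
print(classifier("path/to/mri_scan.jpg"))
```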
8dd1bdbc466d45ca185d2522be9a3942
comodoro/wav2vec2-xls-r-300m-west-slavic-cv8
comodoro
wav2vec2
12
30
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['cs', 'hsb', 'pl', 'sk', 'sl']
['mozilla-foundation/common_voice_8_0']
null
1
1
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
true
true
true
1,241
false
# wav2vec2-xls-r-300m-west-slavic-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. Training and validation sets were concatenated and shuffled. Evaluation set used for training was concatenated from the respective test sets and shuffled while limiting each language to at most 2000 samples. During training, cca WER 70 was achieved on this set. ### Evaluation script ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang} ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
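Beyond the evaluation command above, a hedged transcription sketch (not from the original card): it assumes the standard `transformers` automatic-speech-recognition pipeline and a 16 kHz mono recording; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="comodoro/wav2vec2-xls-r-300m-west-slavic-cv8")

# Placeholder path to a 16 kHz mono recording in any of the five supported languages.
print(asr("path/to/recording.wav")["text"])
```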
69ee2b325c0350105c461ac4978ff6b1
mrm8488/santacoder-finetuned-the-stack-bash-3
mrm8488
gpt2
11
1
transformers
0
text-generation
true
false
false
openrail
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,760
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # santacoder-finetuned-the-stack-bash-3 This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 0.1 | 500 | nan | | 0.0 | 0.2 | 1000 | nan | | 0.0 | 0.3 | 1500 | nan | | 0.0 | 0.4 | 2000 | nan | | 0.0 | 0.5 | 2500 | nan | | 0.0 | 0.6 | 3000 | nan | | 0.0 | 0.7 | 3500 | nan | | 0.0 | 0.8 | 4000 | nan | | 0.0 | 0.9 | 4500 | nan | | 0.0 | 1.0 | 5000 | nan | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
86abdd6f3d8543089fe1f9b2b130645e
deprem-ml/deprem-roberta-intent
deprem-ml
null
11
0
transformers
0
text-classification
false
false
false
apache-2.0
['tr']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,202
false
# Turkish Multi-label Intent Classification RoBERTa A multi-label RoBERTa model trained on data labeled to match the needs of earthquake victims. Evaluation results are given below. **Evaluation** - 'eval_loss': 0.18568251545368838, - 'eval_runtime': 2.7693, - 'eval_samples_per_second': 254.935, - 'eval_steps_per_second': 8.305, - 'epoch': 3.0 **Classification Report** ``` precision recall f1-score support Alakasiz 0.95 0.87 0.91 781 Barinma 0.86 0.52 0.65 234 Elektronik 0.00 0.00 0.00 171 Giysi 0.89 0.25 0.39 122 Kurtarma 0.86 0.78 0.82 472 Lojistik 0.00 0.00 0.00 123 Saglik 0.78 0.05 0.09 148 Su 0.92 0.11 0.20 96 Yagma 0.00 0.00 0.00 19 Yemek 0.94 0.42 0.58 158 micro avg 0.91 0.55 0.69 2324 macro avg 0.62 0.30 0.36 2324 weighted avg 0.78 0.55 0.61 2324 samples avg 0.69 0.63 0.65 2324 ```
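A hedged multi-label inference sketch (not from the original card): it assumes the checkpoint loads with `AutoModelForSequenceClassification` and that a sigmoid over the logits with a 0.5 threshold recovers the labels listed in the report above; the example message is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "deprem-ml/deprem-roberta-intent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative message ("We are under the rubble, we urgently need a rescue team.").
inputs = tokenizer("Enkaz altındayız, acil kurtarma ekibine ihtiyacımız var.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Multi-label decision: keep every class whose probability clears the (assumed) 0.5 threshold.
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```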
61ef829b9d8de415a15edd5498a97107
austinmw/distilbert-base-uncased-finetuned-health_facts
austinmw
distilbert
50
4
transformers
0
text-classification
true
false
false
apache-2.0
null
['health_fact']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,562
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-health_facts This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the health_fact dataset. It achieves the following results on the evaluation set: - Loss: 1.1227 - Accuracy: 0.6285 - F1: 0.6545 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1367 | 1.0 | 154 | 0.9423 | 0.5560 | 0.6060 | | 0.9444 | 2.0 | 308 | 0.9267 | 0.5733 | 0.6170 | | 0.8248 | 3.0 | 462 | 0.9483 | 0.5832 | 0.6256 | | 0.7213 | 4.0 | 616 | 1.0119 | 0.5815 | 0.6219 | | 0.608 | 5.0 | 770 | 1.1227 | 0.6285 | 0.6545 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
497e9cd499ed7bfa2ba135eda52092d3
classla/roberta-base-frenk-hate
classla
roberta
11
3
transformers
0
text-classification
true
false
false
cc-by-sa-4.0
['en']
null
null
0
0
0
0
0
0
0
['text-classification', 'hate-speech']
false
true
true
4,216
false
# roberta-base-frenk-hate Text classification model based on [`roberta-base`](https://huggingface.co/roberta-base) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433) comprising LGBT and migrant hate speech. Only the English subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable). ## Fine-tuning hyperparameters Fine-tuning was performed with `simpletransformers`. Beforehand, a brief hyperparameter optimisation was performed, and the presumed optimal hyperparameters are: ```python model_args = { "num_train_epochs": 6, "learning_rate": 3e-6, "train_batch_size": 69} ``` ## Performance The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards. | model | average accuracy | average macro F1| |---|---|---| |roberta-base-frenk-hate|0.7915|0.7785| |xlm-roberta-large |0.7904|0.77876| |xlm-roberta-base |0.7577|0.7402| |fasttext|0.725 |0.707 | From the recorded accuracies and macro F1 scores, p-values were also calculated: Comparison with `xlm-roberta-base`: | test | accuracy p-value | macro F1 p-value| | --- | --- | --- | |Wilcoxon|0.00781|0.00781| |Mann-Whitney U-test|0.00108|0.00108| |Student t-test | 1.35e-08 | 1.05e-07| Comparison with `xlm-roberta-large` yielded inconclusive results. `roberta-base` has an average accuracy of 0.7915, while `xlm-roberta-large` has an average accuracy of 0.7904. If macro F1 scores are compared, `roberta-base` actually has a lower average than `xlm-roberta-large`: 0.77852 vs 0.77876 respectively. The same statistical tests were performed with the premise that `roberta-base` has greater metrics, and the results are given below. | test | accuracy p-value | macro F1 p-value| | --- | --- | --- | |Wilcoxon|0.188|0.406| |Mann-Whitney U-test|0.375|0.649| |Student t-test | 0.681| 0.934| With the reversed premise (i.e., that `xlm-roberta-large` has greater statistics), the Wilcoxon p-value for macro F1 scores reaches 0.656, the Mann-Whitney p-value is 0.399, and of course the Student p-value stays the same. It was therefore concluded that the performance of the two models is not statistically significantly different from one another.
## Use examples ```python from simpletransformers.classification import ClassificationModel model_args = { "num_train_epochs": 6, "learning_rate": 3e-6, "train_batch_size": 69} model = ClassificationModel( "roberta", "5roop/roberta-base-frenk-hate", use_cuda=True, args=model_args ) predictions, logit_output = model.predict(["Build the wall", "Build the wall of trust"] ) predictions ### Output: ### array([1, 0]) ``` ## Citation If you use the model, please cite the following paper on which the original model is based: ``` @article{DBLP:journals/corr/abs-1907-11692, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach}, journal = {CoRR}, volume = {abs/1907.11692}, year = {2019}, url = {http://arxiv.org/abs/1907.11692}, archivePrefix = {arXiv}, eprint = {1907.11692}, timestamp = {Thu, 01 Aug 2019 08:59:33 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` and the dataset used for fine-tuning: ``` @misc{ljubešić2019frenk, title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English}, author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec}, year={2019}, eprint={1906.02045}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/1906.02045} } ```
21422e77c86c200b9a213cebd39c6183
cj-mills/bert-base-uncased-issues-128
cj-mills
bert
10
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,951
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1071 | 1.0 | 291 | 1.6964 | | 1.6421 | 2.0 | 582 | 1.4279 | | 1.4853 | 3.0 | 873 | 1.3924 | | 1.4014 | 4.0 | 1164 | 1.3701 | | 1.3388 | 5.0 | 1455 | 1.1944 | | 1.283 | 6.0 | 1746 | 1.2795 | | 1.2394 | 7.0 | 2037 | 1.2671 | | 1.2014 | 8.0 | 2328 | 1.2084 | | 1.1668 | 9.0 | 2619 | 1.1783 | | 1.14 | 10.0 | 2910 | 1.2076 | | 1.1277 | 11.0 | 3201 | 1.2081 | | 1.1053 | 12.0 | 3492 | 1.1628 | | 1.0819 | 13.0 | 3783 | 1.2544 | | 1.0763 | 14.0 | 4074 | 1.1695 | | 1.0634 | 15.0 | 4365 | 1.1157 | | 1.0637 | 16.0 | 4656 | 1.2526 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
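A hedged usage sketch (not from the original card): the checkpoint is a masked-language model (apparently domain-adapted to GitHub-issue text), so it is assumed to work with the standard fill-mask pipeline; the example sentence is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cj-mills/bert-base-uncased-issues-128")

# Illustrative issue-style sentence with a single [MASK] token.
for prediction in fill_mask("This issue describes a [MASK] in the tokenizer."):
    print(prediction["token_str"], round(prediction["score"], 3))
```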
c47f5cd9227a8bb44ba8811a475a1813
huggingnft/boredapeyachtclub__2__mutant-ape-yacht-club
huggingnft
null
3
0
null
1
image-to-image
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['huggan', 'gan', 'image-to-image', 'huggingnft', 'nft', 'image', 'images']
false
true
true
8,484
false
# CycleGAN for unpaired image-to-image translation. ## Model description CycleGAN for unpaired image-to-image translation. Given two image domains A and B, the following components are trained end-to-end to translate between such domains: - A generator A to B, named G_AB, conditioned on an image from A - A generator B to A, named G_BA, conditioned on an image from B - A domain classifier D_A, associated with G_AB - A domain classifier D_B, associated with G_BA At inference time, G_AB or G_BA is used to translate images, from A to B or from B to A respectively. In the general setting, this technique provides style-transfer functionality between the selected image domains A and B. This makes it possible for G_AB to translate an image from domain A so that it resembles the distribution of images from domain B, and vice versa for the generator G_BA. Within this framework, the technique has been used to perform style transfer between NFT collections: a collection is selected as domain A, another one as domain B, and the CycleGAN provides forward and backward translation between A and B. This has been shown to allow high-quality translation even in the absence of paired sample/ground-truth data. In particular, the model performs well with stationary backgrounds (no drastic texture changes in the appearance of backgrounds), as it is capable of recognizing the attributes of each element of an NFT collection. An attribute can be a variation in the type of worn fashion items, such as sunglasses, earrings or clothes, or a face or body attribute with respect to a common template model of the given NFT collection. ## Intended uses & limitations #### How to use ```python import torch from PIL import Image from huggan.pytorch.cyclegan.modeling_cyclegan import GeneratorResNet from torchvision import transforms as T from torchvision.transforms import Compose, Resize, ToTensor, Normalize from torchvision.utils import make_grid from huggingface_hub import hf_hub_download, file_download from accelerate import Accelerator import json def load_lightweight_model(model_name): file_path = file_download.hf_hub_download( repo_id=model_name, filename="config.json" ) config = json.loads(open(file_path).read()) organization_name, name = model_name.split("/") model = Trainer(**config, organization_name=organization_name, name=name) model.load(use_cpu=True) model.accelerator = Accelerator() return model def get_concat_h(im1, im2): dst = Image.new('RGB', (im1.width + im2.width, im1.height)) dst.paste(im1, (0, 0)) dst.paste(im2, (im1.width, 0)) return dst n_channels = 3 image_size = 256 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) # load the translation model from source to target images: source will be generated by a separate Lightweight GAN, while the target images are the result of the translation applied by the GeneratorResnet to the generated source images. 
# Hence, given the source domain A and target domain B, # B = Translator(GAN(A)) translator = GeneratorResNet.from_pretrained(f'huggingnft/{model_name}', input_shape=(n_channels, image_size, image_size), num_residual_blocks=9) # sample noise that is used to generate source images by the z = torch.randn(nrows, 100, 1, 1) # load the GAN generator of source images that will be translated by the translation model model = load_lightweight_model(f"huggingnft/{model_name.split('__2__')[0]}") collectionA = model.generate_app( num=timestamped_filename(), nrow=nrows, checkpoint=-1, types="default" )[1] # resize to translator model input shape resize = T.Resize((256, 256)) input = resize(collectionA) # translate the resized collectionA to collectionB collectionB = translator(input) out_transform = T.ToPILImage() results = [] for collA_image, collB_image in zip(input, collectionB): results.append( get_concat_h(out_transform(make_grid(collA_image, nrow=1, normalize=True)), out_transform(make_grid(collB_image, nrow=1, normalize=True))) ) ``` #### Limitations and bias Translation between collections provides exceptional output images in the case of NFT collections that portray subjects in the same way. If the backgrounds vary too much within either of the collections, performance degrades or many more training iterations re required to achieve acceptable results. ## Training data The CycleGAN model is trained on an unpaired dataset of samples from two selected NFT collections: colle tionA and collectionB. To this end, two collections are loaded by means of the function load_dataset in the huggingface library, as follows. A list of all available collections is available at [huggingNFT](https://huggingface.co/huggingnft) ```python from datasets import load_dataset collectionA = load_dataset("huggingnft/COLLECTION_A") collectionB = load_dataset("huggingnft/COLLECTION_B") ``` ## Training procedure #### Preprocessing The following transformations are applied to each input sample of collectionA and collectionB. The input size is fixed to RGB images of height, width = 256, 256 ```python n_channels = 3 image_size = 256 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) ``` #### Hardware The configuration has been tested on single GPU setup on a RTX5000 and A5000, as well as multi-gpu single-rank distributed setups composed of 2 of the mentioned GPUs. #### Hyperparameters The following configuration has been kept fixed for all translation models: - learning rate 0.0002 - number of epochs 200 - learning rate decay activation at epoch 80 - number of residual blocks of the cyclegan 9 - cycle loss weight 10.0 - identity loss weight 5.0 - optimizer ADAM with beta1 0.5 and beta2 0.999 - batch size 8 - NO mixed precision training ## Eval results #### Training reports [Cryptopunks to boreapeyachtclub](https://wandb.ai/chris1nexus/experiments--experiments_cyclegan_punk_to_apes_HQ--0/reports/CycleGAN-training-report--VmlldzoxODUxNzQz?accessToken=vueurpbhd2i8n347j880yakggs0sqdf7u0hpz3bpfsbrxcmk1jk4obg18f6wfk9w) [Boreapeyachtclub to mutant-ape-yacht-club](https://wandb.ai/chris1nexus/experiments--my_paperspace_boredapeyachtclub__2__mutant-ape-yacht-club--11/reports/CycleGAN-training-report--VmlldzoxODUxNzg4?accessToken=jpyviwn7kdf5216ycrthwp6l8t3heb0lt8djt7dz12guu64qnpdh3ekecfcnoahu) #### Generated Images In the provided images, row0 and row2 represent real images from the respective collections. 
Row1 is the translation of the immediate above images in row0 by means of the G_AB translation model. Row3 is the translation of the immediate above images in row2 by means of the G_BA translation model. Visualization over the training iterations for [boreapeyachtclub to mutant-ape-yacht-club](https://wandb.ai/chris1nexus/experiments--my_paperspace_boredapeyachtclub__2__mutant-ape-yacht-club--11/reports/Shared-panel-22-04-15-08-04-99--VmlldzoxODQ0MDI3?accessToken=45m3kxex5m3rpev3s6vmrv69k3u9p9uxcsp2k90wvbxwxzlqbqjqlnmgpl9265c0) Visualization over the training iterations for [Cryptopunks to boreapeyachtclub](https://wandb.ai/chris1nexus/experiments--experiments_cyclegan_punk_to_apes_HQ--0/reports/Shared-panel-22-04-17-11-04-83--VmlldzoxODUxNjk5?accessToken=o25si6nflp2xst649vt6ayt56bnb95mxmngt1ieso091j2oazmqnwaf4h78vc2tu) ### References ```bibtex @misc{https://doi.org/10.48550/arxiv.1703.10593, doi = {10.48550/ARXIV.1703.10593}, url = {https://arxiv.org/abs/1703.10593}, author = {Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A.}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### BibTeX entry and citation info ```bibtex @InProceedings{huggingnft, author={Aleksey Korshuk, Christian Cancedda} year=2022 } ```
6083008688ef4ad09ff4d56977803903
sd-dreambooth-library/drag-queen-shangela
sd-dreambooth-library
null
19
8
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image']
false
true
true
1,216
false
### drag_queen_Shangela on Stable Diffusion via Dreambooth, trained with the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook #### Model by chrisin2d This is the Stable Diffusion model fine-tuned on the drag_queen_Shangela concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt(s)`: **** You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb). You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb), or you can run it via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Sample pictures of this concept:
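A hedged `diffusers` sketch (not from the original card): the instance prompt is left blank in the card, so the concept token used in the prompt below is an assumption taken from the model name; a CUDA GPU and float16 weights are also assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/drag-queen-shangela", torch_dtype=torch.float16
).to("cuda")

# The concept token is assumed from the model name; check the repository for the real instance prompt.
image = pipe("a portrait photo of drag_queen_Shangela on stage, studio lighting").images[0]
image.save("shangela.png")
```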
6f0fe8379702de3829b7657c18fbc47b
google/multiberts-seed_1-step_700k
google
bert
8
13
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_700k']
false
true
true
3,521
false
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 700k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_700k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
314da41f03ac88c14ce1beeda14c72f4
doc2query/msmarco-german-mt5-base-v1
doc2query
mt5
10
455
transformers
2
text2text-generation
true
false
false
apache-2.0
['de']
['unicamp-dl/mmarco']
null
0
0
0
0
0
0
0
[]
false
true
true
3,823
false
# doc2query/msmarco-german-mt5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, the model re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch model_name = 'doc2query/msmarco-german-mt5-base-v1' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert." def create_queries(para): input_ids = tokenizer.encode(para, return_tensors='pt') with torch.no_grad(): # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality sampling_outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, top_k=10, num_return_sequences=5 ) # Here we use beam search. It generates better quality queries, but with less diversity beam_outputs = model.generate( input_ids=input_ids, max_length=64, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) print("Paragraph:") print(para) print("\nBeam Outputs:") for i in range(len(beam_outputs)): query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') print("\nSampling Outputs:") for i in range(len(sampling_outputs)): query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') create_queries(text) ``` **Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it. ## Training This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository. The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
d7b6e64363dc78d6a44dc361d5a3482a
slplab/wav2vec2_xlsr50k_english_phoneme
slplab
wav2vec2
12
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,765
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_xlsr50k_english_phoneme This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [the TIMIT dataset](https://catalog.ldc.upenn.edu/LDC93s1). It achieves the following results on the evaluation set: - Loss: 0.5783 - Cer: 0.1178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8403 | 6.94 | 500 | 1.1345 | 0.4657 | | 0.5795 | 13.88 | 1000 | 0.3579 | 0.1169 | | 0.3567 | 20.83 | 1500 | 0.3866 | 0.1174 | | 0.2717 | 27.77 | 2000 | 0.4219 | 0.1169 | | 0.2135 | 34.72 | 2500 | 0.4861 | 0.1199 | | 0.1664 | 41.66 | 3000 | 0.5490 | 0.1179 | | 0.1375 | 48.61 | 3500 | 0.5783 | 0.1178 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.1 - Datasets 1.13.3 - Tokenizers 0.12.1
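A hedged recognition sketch (not from the original card): it assumes the repository ships a `Wav2Vec2Processor` with a phoneme vocabulary and that the input is a 16 kHz mono waveform; the audio path is a placeholder and `soundfile` is an extra dependency.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "slplab/wav2vec2_xlsr50k_english_phoneme"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder path; the waveform is assumed to be 16 kHz mono.
speech, sampling_rate = sf.read("path/to/audio_16khz.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding to a phoneme string.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```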
3fe8cdcb222346950f4d6fa45c6b71f4
marcus2000/model_for_inca
marcus2000
distilbert
16
4
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,065
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_for_inca This model is a fine-tuned version of [marcus2000/finetuning-sentiment-model-3000-samples](https://huggingface.co/marcus2000/finetuning-sentiment-model-3000-samples) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3349 - F1: 0.9281 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
f964b3513d1e2c3898dfb557e5fbea3b
Yehor/wav2vec2-xls-r-1b-uk-with-lm
Yehor
wav2vec2
24
14
transformers
3
automatic-speech-recognition
true
false
false
apache-2.0
['uk']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
1
0
1
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'uk']
true
true
true
2,589
false
# Ukrainian STT model (with Language Model) 🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk ⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UK dataset. It achieves the following results on the evaluation set without the language model: - Loss: 0.1875 - Wer: 0.2033 - Cer: 0.0384 ## Model description On 100 test example the model shows the following results: Without LM: - WER: 0.1862 - CER: 0.0277 With LM: - WER: 0.1218 - CER: 0.0190 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.2815 | 7.93 | 500 | 0.3536 | 0.4753 | 0.1009 | | 1.0869 | 15.86 | 1000 | 0.2317 | 0.3111 | 0.0614 | | 0.9984 | 23.8 | 1500 | 0.2022 | 0.2676 | 0.0521 | | 0.975 | 31.74 | 2000 | 0.1948 | 0.2469 | 0.0487 | | 0.9306 | 39.67 | 2500 | 0.1916 | 0.2377 | 0.0464 | | 0.8868 | 47.61 | 3000 | 0.1903 | 0.2257 | 0.0439 | | 0.8424 | 55.55 | 3500 | 0.1786 | 0.2206 | 0.0423 | | 0.8126 | 63.49 | 4000 | 0.1849 | 0.2160 | 0.0416 | | 0.7901 | 71.42 | 4500 | 0.1869 | 0.2138 | 0.0413 | | 0.7671 | 79.36 | 5000 | 0.1855 | 0.2075 | 0.0394 | | 0.7467 | 87.3 | 5500 | 0.1884 | 0.2049 | 0.0389 | | 0.731 | 95.24 | 6000 | 0.1877 | 0.2060 | 0.0387 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.18.1.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python eval.py --model_id Yehor/wav2vec2-xls-r-1b-uk-with-lm --dataset mozilla-foundation/common_voice_7_0 --config uk --split test ``` ### Eval results on Common Voice 7 "test" (WER): | Without LM | With LM (run `./eval.py`) | |---|---| | 21.52 | 14.62 |
a1a650e249808a23788b1ff3e9a585ef
sd-dreambooth-library/backpack
sd-dreambooth-library
null
20
3
diffusers
0
null
false
false
false
mit
null
null
null
2
2
0
0
0
0
0
[]
false
true
true
725
false
### Backpack on Stable Diffusion via Dreambooth #### Model by homanp This is the Stable Diffusion model fine-tuned on the Backpack concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks backpack** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/backpack/resolve/main/concept_images/1.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/backpack/resolve/main/concept_images/0.jpeg)
1e5a595eb4d1077e0e194aadb8f48e27
spacy/de_core_news_md
spacy
null
32
32
spacy
0
token-classification
false
false
false
mit
['de']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
31,287
false
### Details: https://spacy.io/models/de#de_core_news_md German pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner. | Feature | Description | | --- | --- | | **Name** | `de_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [TIGER Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html) (Brants, Sabine, Stefanie Dipper, Peter Eisenberg, Silvia Hansen, Esther König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit)<br />[Tiger2Dep](https://www.ims.uni-stuttgart.de/forschung/ressourcen/werkzeuge/tiger2dep/) (Wolfgang Seeker)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (772 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$(`, `$,`, `$.`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `ITJ`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NN`, `NNE`, `PDAT`, `PDS`, `PIAT`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PROAV`, `PTKA`, `PTKANT`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `TRUNC`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VMPP`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY`, `_SP` | | **`morphologizer`** | `POS=PUNCT`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `POS=VERB\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Degree=Pos\|POS=ADV`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, 
`Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=SCONJ`, `Case=Acc\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=PROPN`, `POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=NUM`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADP`, `Gender=Neut\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=SCONJ\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=X`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SPACE`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=AUX\|VerbForm=Inf`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=AUX\|VerbForm=Part`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, 
`Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=ADP`, 
`Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=NOUN`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|POS=PROPN`, `Case=Gen\|Definite=Def\|POS=DET\|PronType=Art`, `Case=Gen\|POS=PROPN`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|POS=PRON\|PronType=Dem`, `Definite=Ind\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Neut\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, _(truncated: full list in pipeline meta)_ | | **`parser`** | `ROOT`, `ac`, `adc`, `ag`, `ams`, `app`, `avc`, `cc`, `cd`, `cj`, `cm`, `cp`, `cvc`, `da`, `dep`, `dm`, `ep`, `ju`, `mnr`, `mo`, `ng`, `nk`, `nmc`, `oa`, `oc`, `og`, `op`, `par`, `pd`, `pg`, `ph`, `pm`, `pnc`, `punct`, `rc`, `re`, `rs`, `sb`, `sbp`, `svp`, `uc`, `vo` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.96 | | `TOKEN_P` | 99.92 | | `TOKEN_R` | 99.90 | | `TOKEN_F` | 99.91 | | `TAG_ACC` | 97.81 | | `POS_ACC` | 98.29 | | `MORPH_ACC` | 91.51 | | `MORPH_MICRO_P` | 95.69 | | `MORPH_MICRO_R` | 95.61 | | `MORPH_MICRO_F` | 95.65 | | `SENTS_P` | 95.41 | | `SENTS_R` | 96.22 | | `SENTS_F` | 95.08 | | `DEP_UAS` | 92.54 | | `DEP_LAS` | 90.57 | | `LEMMA_ACC` | 97.70 | | `ENTS_P` | 84.39 | | `ENTS_R` | 83.43 | | `ENTS_F` | 83.91 |
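A minimal usage sketch for this pipeline, assuming the `de_core_news_md` package has already been installed (e.g. via `python -m spacy download de_core_news_md`); the example sentence is illustrative only.

```python
import spacy

# Load the installed German pipeline (tok2vec, tagger, morphologizer, parser,
# lemmatizer, attribute_ruler, ner by default).
nlp = spacy.load("de_core_news_md")

doc = nlp("Angela Merkel besuchte im Juli das Werk in Stuttgart.")

for token in doc:
    # Coarse POS, fine-grained TIGER tag, morphology and dependency label.
    print(token.text, token.pos_, token.tag_, token.morph, token.dep_)

for ent in doc.ents:
    # Entities from the `ner` component: LOC, MISC, ORG, PER.
    print(ent.text, ent.label_)
```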
4e7d168011e284d3b02036b82ae308a0
wangpuupup/whisper_test
wangpuupup
whisper
27
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['data/copas']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
2,557
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small dysarthric Dutch This model is a fine-tuned version of [qmeeus/whisper-small-nl](https://huggingface.co/qmeeus/whisper-small-nl) on the data/copas copas-full dataset. It achieves the following results on the evaluation set: - Loss: 0.4242 - Wer: 24.5560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.3363 | 2.02 | 500 | 0.3762 | 29.7934 | | 0.0945 | 5.02 | 1000 | 0.3418 | 27.6912 | | 0.0332 | 8.01 | 1500 | 0.3353 | 26.1689 | | 0.0147 | 11.01 | 2000 | 0.3476 | 26.1327 | | 0.0071 | 14.01 | 2500 | 0.3623 | 25.9333 | | 0.0034 | 17.01 | 3000 | 0.3789 | 25.2084 | | 0.0024 | 20.01 | 3500 | 0.3827 | 24.8641 | | 0.0026 | 23.01 | 4000 | 0.3877 | 25.3171 | | 0.0021 | 26.01 | 4500 | 0.3933 | 25.4259 | | 0.0014 | 29.01 | 5000 | 0.3941 | 25.0997 | | 0.0008 | 32.01 | 5500 | 0.4014 | 25.0997 | | 0.0004 | 35.01 | 6000 | 0.4035 | 24.8278 | | 0.0003 | 38.01 | 6500 | 0.4080 | 24.9184 | | 0.0003 | 41.01 | 7000 | 0.4120 | 24.8097 | | 0.0002 | 44.01 | 7500 | 0.4151 | 24.6104 | | 0.0002 | 47.01 | 8000 | 0.4176 | 24.3929 | | 0.0002 | 50.01 | 8500 | 0.4200 | 24.5198 | | 0.0001 | 53.0 | 9000 | 0.4230 | 24.5198 | | 0.0001 | 56.0 | 9500 | 0.4252 | 24.4291 | | 0.0001 | 59.0 | 10000 | 0.4242 | 24.5560 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
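A hedged usage sketch with the Transformers ASR pipeline; the audio file name is a placeholder, and Whisper expects 16 kHz mono input.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; generation settings follow the pipeline defaults.
asr = pipeline("automatic-speech-recognition", model="wangpuupup/whisper_test")

# dutch_sample.wav is a placeholder for a 16 kHz mono recording.
print(asr("dutch_sample.wav")["text"])
```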
d890782f79663934307e44c5f3f61c38
sudo-s/new_exper3
sudo-s
vit
14
11
transformers
0
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['image-classification', 'generated_from_trainer']
true
true
true
4,377
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # new_exper3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset. It achieves the following results on the evaluation set: - Loss: 0.3000 - Accuracy: 0.9298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Apex, opt level O1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.093 | 0.16 | 100 | 4.1045 | 0.1885 | | 3.5057 | 0.31 | 200 | 3.4448 | 0.3231 | | 2.9116 | 0.47 | 300 | 2.9483 | 0.4537 | | 2.561 | 0.63 | 400 | 2.5700 | 0.5258 | | 2.1611 | 0.78 | 500 | 2.1721 | 0.6145 | | 1.715 | 0.94 | 600 | 1.8255 | 0.6407 | | 1.2752 | 1.1 | 700 | 1.5340 | 0.7051 | | 1.2487 | 1.25 | 800 | 1.3533 | 0.7201 | | 1.0333 | 1.41 | 900 | 1.1474 | 0.7826 | | 0.8856 | 1.56 | 1000 | 1.0914 | 0.7645 | | 0.7512 | 1.72 | 1100 | 0.8893 | 0.8119 | | 0.747 | 1.88 | 1200 | 0.8370 | 0.8304 | | 0.5082 | 2.03 | 1300 | 0.7131 | 0.8566 | | 0.4449 | 2.19 | 1400 | 0.6573 | 0.8547 | | 0.2912 | 2.35 | 1500 | 0.6184 | 0.8597 | | 0.285 | 2.5 | 1600 | 0.5974 | 0.8570 | | 0.2267 | 2.66 | 1700 | 0.5621 | 0.8647 | | 0.2553 | 2.82 | 1800 | 0.5044 | 0.8816 | | 0.2029 | 2.97 | 1900 | 0.4342 | 0.8955 | | 0.1763 | 3.13 | 2000 | 0.4487 | 0.8905 | | 0.1418 | 3.29 | 2100 | 0.4173 | 0.9005 | | 0.0563 | 3.44 | 2200 | 0.3870 | 0.9048 | | 0.0579 | 3.6 | 2300 | 0.3849 | 0.9036 | | 0.166 | 3.76 | 2400 | 0.3933 | 0.9025 | | 0.11 | 3.91 | 2500 | 0.3918 | 0.9056 | | 0.0356 | 4.07 | 2600 | 0.3298 | 0.9202 | | 0.0513 | 4.23 | 2700 | 0.3371 | 0.9210 | | 0.0762 | 4.38 | 2800 | 0.3253 | 0.9225 | | 0.018 | 4.54 | 2900 | 0.3467 | 0.9148 | | 0.0263 | 4.69 | 3000 | 0.3544 | 0.9144 | | 0.0205 | 4.85 | 3100 | 0.3340 | 0.9221 | | 0.0237 | 5.01 | 3200 | 0.3353 | 0.9144 | | 0.013 | 5.16 | 3300 | 0.3218 | 0.9229 | | 0.0116 | 5.32 | 3400 | 0.3088 | 0.9291 | | 0.0119 | 5.48 | 3500 | 0.3047 | 0.9279 | | 0.0098 | 5.63 | 3600 | 0.3063 | 0.9283 | | 0.0086 | 5.79 | 3700 | 0.3074 | 0.9268 | | 0.0081 | 5.95 | 3800 | 0.3220 | 0.9237 | | 0.0078 | 6.1 | 3900 | 0.3064 | 0.9268 | | 0.0074 | 6.26 | 4000 | 0.3062 | 0.9279 | | 0.0068 | 6.42 | 4100 | 0.3051 | 0.9291 | | 0.006 | 6.57 | 4200 | 0.3000 | 0.9298 | | 0.0075 | 6.73 | 4300 | 0.3010 | 0.9310 | | 0.0057 | 6.89 | 4400 | 0.3037 | 0.9298 | | 0.0058 | 7.04 | 4500 | 0.3071 | 0.9279 | | 0.0075 | 7.2 | 4600 | 0.3075 | 0.9283 | | 0.0066 | 7.36 | 4700 | 0.3077 | 0.9295 | | 0.0056 | 7.51 | 4800 | 0.3084 | 0.9295 | | 0.0053 | 7.67 | 4900 | 0.3064 | 0.9310 | | 0.0057 | 7.82 | 5000 | 0.3068 | 0.9318 | | 0.0055 | 7.98 | 5100 | 0.3068 | 0.9318 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.5.1 - Datasets 2.3.2 - Tokenizers 0.12.1
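A minimal inference sketch with the image-classification pipeline; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sudo-s/new_exper3")

# specimen.jpg is a placeholder path; the pipeline returns top labels with scores.
for pred in classifier("specimen.jpg"):
    print(pred["label"], round(pred["score"], 3))
```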
6d70d49090c8c2a8be9f222724f7aee7
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab
patrickvonplaten
wav2vec2
12
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,735
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4055 - Wer: 0.4800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 | | 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 | | 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 | | 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 | | 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 | | 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 | | 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
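A hedged usage sketch via the Transformers ASR pipeline; the audio file name is a placeholder and the input should be sampled at 16 kHz.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab",
)

# turkish_sample.wav is a placeholder for a 16 kHz recording.
print(asr("turkish_sample.wav")["text"])
```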
afd146fad7482e60a6238b642e8efe8e
Olwflynn/test-trainer-init
Olwflynn
bert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,376
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer-init This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6581 - Accuracy: 0.8603 - F1: 0.9042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 459 | 0.3660 | 0.8505 | 0.8893 | | 0.5003 | 2.0 | 918 | 0.5355 | 0.8407 | 0.8922 | | 0.2654 | 3.0 | 1377 | 0.6581 | 0.8603 | 0.9042 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
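The card does not name the GLUE subtask, but the Accuracy/F1 pair and dataset size are consistent with MRPC-style paraphrase detection; a hedged sketch under that assumption, with illustrative sentences.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Olwflynn/test-trainer-init")

# Sentence-pair input (paraphrase detection); the sentences are illustrative.
print(clf({"text": "The company reported strong quarterly earnings.",
           "text_pair": "Quarterly earnings at the company were strong."}))
```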
04afefda61e322f69ed8854e25047ddd
sd-dreambooth-library/persona-5-shigenori-style
sd-dreambooth-library
null
26
34
diffusers
6
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image']
false
true
true
1,660
false
### Persona-5-Shigenori-Style Dreambooth model trained by Allenbv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept (trained for 3200 steps, 20% text-encoder training, 23 training images). Include "Shigenori Style" in your prompt. ![descarga 0](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(12).png) ![descarga 1](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(10).png) ![descarga 2](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(3).png) ![descarga 3](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(4).png) ![descarga 4](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(5).png) ![descarga 5](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(8).png) ![descarga 6](https://huggingface.co/sd-dreambooth-library/persona-5-shigenori-style/resolve/main/concept_images/descarga_(10).png)
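Beyond the Colab notebooks, a hedged `diffusers` sketch (assuming the repository ships full Stable Diffusion pipeline weights, as fast-DreamBooth exports usually do; the prompt wording is illustrative).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/persona-5-shigenori-style",
    torch_dtype=torch.float16,
).to("cuda")

# Include the trained style phrase in the prompt.
image = pipe("portrait of a phantom thief, Shigenori Style").images[0]
image.save("shigenori_sample.png")
```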
d1df63470c0e3368e0beb5dc95f2ebbb
espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-f45dcb
espnet
null
21
0
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['ja']
['jsut']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
1,873
false
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4381100/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
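The demo section above is still marked "coming soon"; as a hedged sketch, the generic ESPnet2 inference API could be used along these lines (assumes `espnet`, `espnet_model_zoo` and `soundfile` are installed; the Japanese input sentence is illustrative).

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Build the synthesizer directly from the hosted checkpoint.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-f45dcb"
)

out = text2speech("こんにちは、これはテスト音声です。")
sf.write("output.wav", out["wav"].numpy(), text2speech.fs)
```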
ea34aed45c47d9513078ea8a24819084
team-nave/distilbert-base-uncased-finetuned-clinc
team-nave
distilbert
12
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,476
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 1.7601 - Accuracy: 0.8532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 159 | 3.9593 | 0.6442 | | 4.0539 | 2.0 | 318 | 2.9237 | 0.7606 | | 4.0539 | 3.0 | 477 | 2.2412 | 0.8174 | | 2.3862 | 4.0 | 636 | 1.8768 | 0.8397 | | 2.3862 | 5.0 | 795 | 1.7601 | 0.8532 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1 - Datasets 1.16.1 - Tokenizers 0.10.3
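A minimal intent-classification sketch with the Transformers pipeline; the utterance is illustrative.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="team-nave/distilbert-base-uncased-finetuned-clinc",
)

# CLINC-style intent detection on a banking utterance.
print(clf("Please transfer 100 dollars from checking to savings"))
```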
d7596b645d63bbb3d6369cb6e7b70b33
luke-thorburn/suggest-conclusion-soft
luke-thorburn
gpt_neo
4
4
transformers
0
text-generation
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['argumentation']
false
true
true
1,648
false
# Generate the conclusion of an argument This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` [prepended soft prompt]- [premise 1] - [premise 2] ... - [premise n] Conclusion: [generated conclusion] ``` # Dataset The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
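A small helper that renders the prompt template above for a list of premises; the soft prompt itself lives inside the model and is not part of the visible text, so only the bulleted premises and the `Conclusion:` cue need to be supplied. The premises shown are illustrative.

```python
def build_prompt(premises):
    """Render the template: one '- ' bullet per premise, then the conclusion cue."""
    bullets = "\n".join(f"- {p}" for p in premises)
    return f"{bullets}\nConclusion:"

print(build_prompt([
    "All humans are mortal.",
    "Socrates is a human.",
]))
```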
539c9b0ba717b2e713cb7cfea447d2db
jonatasgrosman/exp_w2v2t_et_r-wav2vec2_s732
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['et']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'et']
false
true
true
462
false
# exp_w2v2t_et_r-wav2vec2_s732 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
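Since the card points to HuggingSound, a minimal transcription sketch with that tool; the audio paths are placeholders for 16 kHz Estonian recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_r-wav2vec2_s732")

# Paths are placeholders; each file should be a 16 kHz recording.
transcriptions = model.transcribe(["estonian_sample_1.wav", "estonian_sample_2.wav"])
print(transcriptions[0]["transcription"])
```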
f4a3a34e059d0b61dae8da927decf461
sd-concepts-library/vb-mox
sd-concepts-library
null
13
0
null
7
null
false
false
false
mit
null
null
null
0
0
0
0
1
1
0
[]
false
true
true
1,390
false
### vb-mox on Stable Diffusion This is the `<vb-mox>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<vb-mox> 0](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/5.jpeg) ![<vb-mox> 1](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/6.jpeg) ![<vb-mox> 2](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/3.jpeg) ![<vb-mox> 3](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/0.jpeg) ![<vb-mox> 4](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/2.jpeg) ![<vb-mox> 5](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/7.jpeg) ![<vb-mox> 6](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/1.jpeg) ![<vb-mox> 7](https://huggingface.co/sd-concepts-library/vb-mox/resolve/main/concept_images/4.jpeg)
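Besides the notebooks, a hedged sketch of loading the concept with `diffusers`; the base checkpoint choice is an assumption, since the card only says the concept was taught to Stable Diffusion via Textual Inversion.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any SD v1.x model with a compatible text encoder should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull in the learned <vb-mox> embedding so it can be used in prompts.
pipe.load_textual_inversion("sd-concepts-library/vb-mox")

image = pipe("a product photo of a <vb-mox> on a desk").images[0]
image.save("vb_mox_sample.png")
```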
1479a93668b5a3cf930834183112e5aa
adache/xlm-roberta-base-finetuned-panx-de-fr
adache
xlm-roberta
9
6
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 | | 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 | | 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
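A minimal NER sketch with the token-classification pipeline; the sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adache/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

# German/French named-entity recognition.
for entity in ner("Angela Merkel a rencontré Emmanuel Macron à Paris."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```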
1f33c4e9783dcf34f9d6b190249e20c6
responsibility-framing/predict-perception-bert-focus-assassin
responsibility-framing
bert
12
21
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,992
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-bert-focus-assassin This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2964 - Rmse: 0.8992 - Rmse Focus::a Sull'assassino: 0.8992 - Mae: 0.7331 - Mae Focus::a Sull'assassino: 0.7331 - R2: 0.6500 - R2 Focus::a Sull'assassino: 0.6500 - Cos: 0.7391 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.6131 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0674 | 1.0 | 15 | 0.9851 | 1.6393 | 1.6393 | 1.5316 | 1.5316 | -0.1633 | -0.1633 | 0.1304 | 0.0 | 0.5 | 0.2457 | nan | | 1.0099 | 2.0 | 30 | 0.8921 | 1.5601 | 1.5601 | 1.4317 | 1.4317 | -0.0535 | -0.0535 | 0.5652 | 0.0 | 0.5 | 0.4734 | nan | | 0.9295 | 3.0 | 45 | 0.7345 | 1.4155 | 1.4155 | 1.3113 | 1.3113 | 0.1327 | 0.1327 | 0.5652 | 0.0 | 0.5 | 0.3596 | nan | | 0.8485 | 4.0 | 60 | 0.7282 | 1.4094 | 1.4094 | 1.2678 | 1.2678 | 0.1401 | 0.1401 | 0.7391 | 0.0 | 0.5 | 0.5367 | nan | | 0.7551 | 5.0 | 75 | 0.5966 | 1.2758 | 1.2758 | 1.1144 | 1.1144 | 0.2955 | 0.2955 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.5563 | 6.0 | 90 | 0.4578 | 1.1175 | 1.1175 | 0.9105 | 0.9105 | 0.4594 | 0.4594 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.4048 | 7.0 | 105 | 0.3539 | 0.9826 | 0.9826 | 0.7770 | 0.7770 | 0.5821 | 0.5821 | 0.6522 | 0.0 | 0.5 | 0.5522 | nan | | 0.3319 | 8.0 | 120 | 0.2938 | 0.8953 | 0.8953 | 0.7110 | 0.7110 | 0.6530 | 0.6530 | 0.6522 | 0.0 | 0.5 | 0.6021 | nan | | 0.2224 | 9.0 | 135 | 0.3455 | 0.9708 | 0.9708 | 0.7607 | 0.7607 | 0.5921 | 0.5921 | 0.6522 | 0.0 | 0.5 | 0.3911 | nan | | 0.1794 | 10.0 | 150 | 0.2719 | 0.8612 | 0.8612 | 0.6768 | 0.6768 | 0.6790 | 0.6790 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1553 | 11.0 | 165 | 0.2855 | 0.8826 | 0.8826 | 0.7053 | 0.7053 | 0.6628 | 0.6628 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1008 | 12.0 | 180 | 0.3000 | 0.9046 | 0.9046 | 0.7255 | 0.7255 | 0.6458 | 0.6458 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.1121 | 13.0 | 195 | 0.2817 | 0.8766 | 0.8766 | 0.7236 | 0.7236 | 0.6674 | 0.6674 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.08 | 14.0 | 210 | 0.3504 | 0.9777 | 0.9777 | 0.7631 | 0.7631 | 0.5863 | 0.5863 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0802 | 15.0 | 225 | 0.3031 | 0.9094 | 0.9094 | 0.7565 | 0.7565 | 0.6420 | 0.6420 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0685 | 16.0 | 240 | 0.3041 | 0.9109 | 0.9109 | 0.7409 | 0.7409 | 0.6408 | 0.6408 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0592 | 17.0 | 255 | 0.3496 | 0.9767 | 
0.9767 | 0.7812 | 0.7812 | 0.5871 | 0.5871 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0625 | 18.0 | 270 | 0.3260 | 0.9430 | 0.9430 | 0.7757 | 0.7757 | 0.6151 | 0.6151 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0589 | 19.0 | 285 | 0.3118 | 0.9222 | 0.9222 | 0.7442 | 0.7442 | 0.6318 | 0.6318 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0518 | 20.0 | 300 | 0.3062 | 0.9140 | 0.9140 | 0.7459 | 0.7459 | 0.6384 | 0.6384 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0456 | 21.0 | 315 | 0.3200 | 0.9344 | 0.9344 | 0.7592 | 0.7592 | 0.6221 | 0.6221 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0477 | 22.0 | 330 | 0.3132 | 0.9244 | 0.9244 | 0.7532 | 0.7532 | 0.6301 | 0.6301 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0448 | 23.0 | 345 | 0.3006 | 0.9056 | 0.9056 | 0.7321 | 0.7321 | 0.6450 | 0.6450 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.0494 | 24.0 | 360 | 0.2985 | 0.9024 | 0.9024 | 0.7463 | 0.7463 | 0.6475 | 0.6475 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0369 | 25.0 | 375 | 0.3039 | 0.9105 | 0.9105 | 0.7359 | 0.7359 | 0.6412 | 0.6412 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0456 | 26.0 | 390 | 0.2989 | 0.9030 | 0.9030 | 0.7210 | 0.7210 | 0.6471 | 0.6471 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.044 | 27.0 | 405 | 0.2997 | 0.9042 | 0.9042 | 0.7418 | 0.7418 | 0.6461 | 0.6461 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0352 | 28.0 | 420 | 0.2970 | 0.9001 | 0.9001 | 0.7346 | 0.7346 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0429 | 29.0 | 435 | 0.2970 | 0.9001 | 0.9001 | 0.7281 | 0.7281 | 0.6493 | 0.6493 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0378 | 30.0 | 450 | 0.2964 | 0.8992 | 0.8992 | 0.7331 | 0.7331 | 0.6500 | 0.6500 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
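The RMSE/MAE/R2 metrics suggest a single-output regression head; a hedged scoring sketch under that assumption, with an illustrative Italian input sentence.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "responsibility-framing/predict-perception-bert-focus-assassin"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("L'assassino ha colpito di nuovo in pieno giorno.", return_tensors="pt")
with torch.no_grad():
    # Assumes a single regression logit, as the RMSE/MAE metrics imply.
    score = model(**inputs).logits.squeeze().item()
print(score)
```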
236d8773799c2183104948e0045c5004
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
StivenLancheros
roberta
14
205
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,821
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.2043 - Precision: 0.8666 - Recall: 0.8614 - F1: 0.8639 - Accuracy: 0.9734 ## Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT(Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three letter code to a full name e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT translated CRAFT) were concatenated. To improve F1 score the transfer learning was completed in two steps. Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) as a base model, I finetuned once more on the original CRAFT dataset in English. Biobert --> Augmented CRAFT --> CRAFT ES (MT translated) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0088 | 1.0 | 1360 | 0.1793 | 0.8616 | 0.8487 | 0.8551 | 0.9721 | | 0.0046 | 2.0 | 2720 | 0.1925 | 0.8618 | 0.8426 | 0.8521 | 0.9713 | | 0.0032 | 3.0 | 4080 | 0.1926 | 0.8558 | 0.8630 | 0.8594 | 0.9725 | | 0.0011 | 4.0 | 5440 | 0.2043 | 0.8666 | 0.8614 | 0.8639 | 0.9734 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
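A minimal usage sketch for the six-class biomedical NER task described above; the Spanish sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES",
    aggregation_strategy="simple",
)

# Spanish biomedical sentence; expected tags include Gene, Protein, Chemical, etc.
for entity in ner("El gen BRCA1 codifica una proteína implicada en la reparación del ADN."):
    print(entity["entity_group"], entity["word"])
```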
458be6a13f5227eae492df2029cb6cfe
espnet/Shinji_Watanabe_spgispeech_asr_train_asr_conformer6_n_fft512_hop_lengt-truncated-f1ac86
espnet
null
31
1
espnet
1
automatic-speech-recognition
false
false
false
cc-by-4.0
['en']
['spgispeech']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
1,881
false
## Example ESPnet2 ASR model ### `Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4585546/ This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
f99ce77549e8646468c488016ce7b2bd
mousaazari/t5-small-finetuned-wikisql
mousaazari
t5
17
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,342
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2640 - Rouge2 Precision: 0.8471 - Rouge2 Recall: 0.3841 - Rouge2 Fmeasure: 0.5064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 11 | 2.7587 | 0.098 | 0.0305 | 0.045 | | No log | 2.0 | 22 | 2.0056 | 0.0969 | 0.0284 | 0.0422 | | No log | 3.0 | 33 | 1.4456 | 0.1046 | 0.0349 | 0.0503 | | No log | 4.0 | 44 | 1.0317 | 0.1054 | 0.0337 | 0.0482 | | No log | 5.0 | 55 | 0.7603 | 0.2749 | 0.1299 | 0.1724 | | No log | 6.0 | 66 | 0.5722 | 0.7115 | 0.352 | 0.4552 | | No log | 7.0 | 77 | 0.4751 | 0.6872 | 0.337 | 0.436 | | No log | 8.0 | 88 | 0.4253 | 0.7256 | 0.3439 | 0.4462 | | No log | 9.0 | 99 | 0.3805 | 0.7335 | 0.3204 | 0.4308 | | No log | 10.0 | 110 | 0.3562 | 0.7342 | 0.3239 | 0.433 | | No log | 11.0 | 121 | 0.3275 | 0.7906 | 0.355 | 0.471 | | No log | 12.0 | 132 | 0.3133 | 0.8382 | 0.3838 | 0.5061 | | No log | 13.0 | 143 | 0.2996 | 0.8409 | 0.3841 | 0.5062 | | No log | 14.0 | 154 | 0.2903 | 0.8304 | 0.3763 | 0.4978 | | No log | 15.0 | 165 | 0.2867 | 0.8409 | 0.3841 | 0.5062 | | No log | 16.0 | 176 | 0.2786 | 0.8409 | 0.3841 | 0.5062 | | No log | 17.0 | 187 | 0.2711 | 0.8409 | 0.3841 | 0.5062 | | No log | 18.0 | 198 | 0.2673 | 0.8409 | 0.3841 | 0.5062 | | No log | 19.0 | 209 | 0.2643 | 0.8471 | 0.3841 | 0.5064 | | No log | 20.0 | 220 | 0.2640 | 0.8471 | 0.3841 | 0.5064 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
d9731ee870ebb48b35dd77571a94e064
jonatasgrosman/exp_w2v2t_th_vp-100k_s497
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['th']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'th']
false
true
true
478
false
# exp_w2v2t_th_vp-100k_s497 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
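As a minimal sketch of the 16 kHz requirement with the generic `transformers` ASR pipeline (the file name `audio.wav` is a placeholder, and `librosa` is only one possible way to resample):

```python
import librosa
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_th_vp-100k_s497",
)

# Load a local file and resample it to the 16 kHz rate the model expects
speech, rate = librosa.load("audio.wav", sr=16_000)
print(asr({"raw": speech, "sampling_rate": rate})["text"])
```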
99a0ceaf876208d574959b0e3aee7f3d
Helsinki-NLP/opus-mt-en-tvl
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-en-tvl * source languages: en * target languages: tvl * OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.tvl | 46.9 | 0.625 |
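A minimal translation sketch with the standard Marian classes from `transformers`; the English input sentence is illustrative, and the SentencePiece pre-processing mentioned above is handled by the tokenizer.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-tvl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an example English sentence into Tuvaluan
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```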
1119c534322a471b3bdb89347d7c02e5
henryscheible/eval_v2_sst2
henryscheible
bert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
888
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_v2_sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
c1a676d92821077daf231ef35acfe94b
julenalvaro/Perros-VS-gatos-con-vit-base-patch16-224-in21k
julenalvaro
vit
12
1
transformers
0
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,564
false
# vit-base-patch16-224-in21k This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1026 - Accuracy: 0.982 ## Model description This model is a fine-tuned version of google/vit-base-patch16-224-in21k which discriminates cats from dogs. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.177 | 0.5 | 500 | 0.2100 | 0.9435 | | 0.1515 | 1.0 | 1000 | 0.0710 | 0.975 | | 0.0443 | 1.5 | 1500 | 0.2043 | 0.9535 | | 0.0625 | 2.0 | 2000 | 0.0898 | 0.9745 | | 0.0181 | 2.5 | 2500 | 0.0961 | 0.9805 | | 0.0091 | 3.0 | 3000 | 0.1049 | 0.982 | | 0.0016 | 3.5 | 3500 | 0.1066 | 0.981 | | 0.0015 | 4.0 | 4000 | 0.1026 | 0.982 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
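A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="julenalvaro/Perros-VS-gatos-con-vit-base-patch16-224-in21k",
)

# "pet.jpg" is a hypothetical local photo of a cat or a dog
print(classifier("pet.jpg"))
```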
f3bb102548f9ff8d95ceedef4711b445
sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens
sentence-transformers
xlm-roberta
13
84,798
sentence-transformers
1
sentence-similarity
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,905
false
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens') model = AutoModel.from_pretrained('sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-100langs-bert-base-nli-stsb-mean-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
21e12d4989786724e1335291d38cbe28
menglingbei/t5-small-finetuned-xsum
menglingbei
t5
11
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['xsum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
920
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
1134eab5d0c9fc349fd6ee0e3aa46153
mrm8488/convnext-tiny-finetuned-eurosat
mrm8488
convnext
11
7
transformers
2
image-classification
true
false
false
apache-2.0
null
['nielsr/eurosat-demo']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'CV', 'ConvNeXT', 'satellite', 'EuroSAT']
true
true
true
2,880
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ConvNeXT (tiny) fine-tuned on EuroSAT This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the [EuroSAT](https://github.com/phelber/eurosat) dataset. It achieves the following results on the evaluation set: - Loss: 0.0549 - Accuracy: 0.9805 #### Drag and drop the following pics in the right widget to test the model ![image1](https://huggingface.co/mrm8488/convnext-tiny-finetuned-eurosat/resolve/main/test1.jpg) ![image2](https://huggingface.co/mrm8488/convnext-tiny-finetuned-eurosat/resolve/main/test2.jpg) ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ## Dataset information **EuroSAT : Land Use and Land Cover Classification with Sentinel-2** In this study, we address the challenge of land use and land cover classification using Sentinel-2 satellite images. The Sentinel-2 satellite images are openly and freely accessible provided in the Earth observation program Copernicus. We present a novel dataset based on Sentinel-2 satellite images covering 13 spectral bands and consisting out of 10 classes with in total 27,000 labeled and geo-referenced images. We provide benchmarks for this novel dataset with its spectral bands using state-of-the-art deep Convolutional Neural Network (CNNs). With the proposed novel dataset, we achieved an overall classification accuracy of 98.57%. The resulting classification system opens a gate towards a number of Earth observation applications. We demonstrate how this classification system can be used for detecting land use and land cover changes and how it can assist in improving geographical maps. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 7171 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2082 | 1.0 | 718 | 0.1057 | 0.9654 | | 0.1598 | 2.0 | 1436 | 0.0712 | 0.9775 | | 0.1435 | 3.0 | 2154 | 0.0549 | 0.9805 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
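A minimal inference sketch using the explicit ConvNeXT classes from a recent `transformers` release; the image file name refers to one of the sample pictures linked above, and `AutoImageProcessor` is assumed to be available in the installed version.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ConvNextForImageClassification

model_id = "mrm8488/convnext-tiny-finetuned-eurosat"
processor = AutoImageProcessor.from_pretrained(model_id)
model = ConvNextForImageClassification.from_pretrained(model_id)

image = Image.open("test1.jpg")  # e.g. one of the sample pictures above
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its EuroSAT land-cover label
print(model.config.id2label[logits.argmax(-1).item()])
```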
02f4df9c2ba7be823ea16731da5485a8
lmqg/mt5-base-itquad-qg
lmqg
mt5
20
102
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['it']
['lmqg/qg_itquad']
null
0
0
0
0
0
0
0
['question generation']
true
true
true
6,463
false
# Model Card of `lmqg/mt5-base-itquad-qg` This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="it", model="lmqg/mt5-base-itquad-qg") # model prediction questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-itquad-qg") output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 81.16 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_1 | 23.29 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_2 | 15.37 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_3 | 10.72 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | Bleu_4 | 7.7 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | METEOR | 18 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | MoverScore | 57.11 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | ROUGE_L | 22.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 87.93 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 61.91 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 88.02 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 62.04 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 87.84 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 61.78 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-base-itquad-ae`](https://huggingface.co/lmqg/mt5-base-itquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.lmqg_mt5-base-itquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 81.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedF1Score (MoverScore) | 55.83 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (BERTScore) | 81.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedPrecision (MoverScore) | 55.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (BERTScore) | 82.16 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | | QAAlignedRecall (MoverScore) | 56.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 11 - batch: 4 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-itquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
8b85759220e8b3926550b330fe4935f1
Duskfallcrew/duskfall-ani-backgrounds
Duskfallcrew
null
21
39
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
884
false
### Duskfall Ani Backgrounds Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects monthly, and not just the AI work: https://www.patreon.com/earthndusk Concept token: BgAniDusk (use it in your prompt)
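A minimal `diffusers` sketch for running the concept locally, assuming a CUDA GPU; the prompt text beyond the `BgAniDusk` token is illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/duskfall-ani-backgrounds",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# "BgAniDusk" is the concept token from the card above; the rest of the prompt is an example
image = pipe("BgAniDusk, quiet city street at dusk, anime background").images[0]
image.save("background.png")
```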
b3c5449d4a3270c3e30e344d921b873a
GIanlucaRub/whisper-tiny-it-7
GIanlucaRub
whisper
59
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['it']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,944
false
# Whisper Tiny it 7 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 2.137834 - Wer: 97.566556 ## Model description This model is the OpenAI Whisper tiny transformer adapted for Italian audio-to-text transcription. As part of the hyperparameter tuning process, weight decay was set to 0.1, attention dropout, encoder dropout and decoder dropout were set to 0.1, the learning rate was set to 1e-6, and the number of decoder and encoder attention heads was set to 8; however, this did not improve performance on the evaluation set. ## Intended uses & limitations The model is available through its [HuggingFace web app](https://huggingface.co/spaces/GIanlucaRub/whisper-it). ## Training and evaluation data The data used for training is the initial 10% of the train and validation splits of [Italian Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/it/train) 11.0 from the Mozilla Foundation. The dataset used for evaluation is the initial 10% of the test split of Italian Common Voice. ## Training procedure After loading the pre-trained model, it was trained on the dataset. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.7353 | 3.82 | 4000 | 2.1378 | 97.5666 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
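Besides the web app, the checkpoint can also be loaded directly; a minimal sketch with the generic `transformers` ASR pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="GIanlucaRub/whisper-tiny-it-7",
)

# "campione_italiano.mp3" is a hypothetical local Italian audio clip
print(transcriber("campione_italiano.mp3")["text"])
```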
0c7f7f56a4c391da5a771afcc42c98e4
sriiikar/wav2vec2-hindi
sriiikar
wav2vec2
12
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,626
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-hindi This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8814 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 23.6834 | 6.25 | 100 | 13.5748 | 1.0 | | 8.2358 | 12.5 | 200 | 3.9834 | 1.0 | | 3.6953 | 18.75 | 300 | 3.7861 | 1.0 | | 3.4186 | 25.0 | 400 | 3.8232 | 1.0 | | 3.2462 | 31.25 | 500 | 3.4688 | 1.0 | | 2.8108 | 37.5 | 600 | 2.8814 | 1.0 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.2.3.dev0 - Tokenizers 0.12.1
a9f57a3d849523f9c520050b6200dfef
funnel-transformer/large-base
funnel-transformer
funnel
9
11
transformers
1
feature-extraction
true
true
false
apache-2.0
['en']
['bookcorpus', 'wikipedia', 'gigaword']
null
0
0
0
0
0
0
0
[]
false
true
true
4,140
false
# Funnel Transformer large model (B8-8-8 without decoder) Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the Funnel Transformer model as inputs. **Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if you need one input per initial token. You should use the `large` model in that case. ## Intended uses & limitations You can use the raw model to extract a vector representation of a given text, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import FunnelTokenizer, FunnelBaseModel tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base") model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import FunnelTokenizer, TFFunnelBaseModel tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base") model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base") text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data The Funnel Transformer model was pretrained on: - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers), - [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages, - [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data, - [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages. ### BibTeX entry and citation info ```bibtex @misc{dai2020funneltransformer, title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing}, author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le}, year={2020}, eprint={2006.03236}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
2fea7a8a102b642e415a142978b60db2
jonatasgrosman/exp_w2v2t_nl_no-pretraining_s399
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['nl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'nl']
false
true
true
414
false
# exp_w2v2t_nl_no-pretraining_s399 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
c8265225a262ce71e8a7ebf010a446fc
davidnai/transformers-qa
davidnai
t5
7
7
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,317
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # transformers-qa This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3199 - Validation Loss: 3.2826 - Train Rougel: tf.Tensor(0.3922559, shape=(), dtype=float32) - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:---------------------------------------------:|:-----:| | 2.3199 | 3.2826 | tf.Tensor(0.3922559, shape=(), dtype=float32) | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
8de23981364e154039ff56732f7f102a
google/t5-efficient-small-dl8
google
t5
12
8
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,251
false
# T5-Efficient-SMALL-DL8 (Deep-Narrow version) T5-Efficient-SMALL-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-small-dl8** - is of model type **Small** with the following variations: - **dl** is **8** It has **68.92** million parameters and thus requires *ca.* **275.66 MB** of memory in full precision (*fp32*) or **137.83 MB** of memory in half precision (*fp16* or *bf16*). 
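The memory figures above follow directly from the parameter count (4 bytes per parameter in fp32, 2 bytes in fp16/bf16); a quick back-of-the-envelope check:

```python
params = 68.92e6             # parameters reported above
mb_fp32 = params * 4 / 1e6   # 4 bytes per float32 parameter
mb_fp16 = params * 2 / 1e6   # 2 bytes per float16/bfloat16 parameter
print(f"{mb_fp32:.2f} MB in fp32")  # ~275.7 MB
print(f"{mb_fp16:.2f} MB in fp16")  # ~137.8 MB
```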
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
e4946c2631d213ad4894a3e568c1cf36
aambrioso/distilbert-base-uncased-finetuned-emotion
aambrioso
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,399
false
# distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [emotion](https://huggingface.co/datasets/emotion) dataset from the Hugging Face Hub. It achieves the following results on the evaluation set: - Loss: 0.2033 - Accuracy: 0.9275 - F1: 0.9273 ## Model description This model is a copy of the model found in the book [Natural Language Processing with Transformers](https://github.com/nlp-with-transformers/notebooks/blob/main/02_classification.ipynb). ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.806 | 1.0 | 250 | 0.2954 | 0.908 | 0.9062 | | 0.2361 | 2.0 | 500 | 0.2033 | 0.9275 | 0.9273 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
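A minimal inference sketch with the `transformers` text-classification pipeline; the input sentence is illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aambrioso/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see you again!"))
```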
b243da2bfdb2c381c9c2383006d5415b
hackathon-pln-es/readability-es-3class-sentences
hackathon-pln-es
roberta
9
1
transformers
2
text-classification
true
false
false
cc-by-4.0
['es']
null
null
0
0
0
0
0
0
0
['spanish', 'roberta', 'bertin']
false
true
true
2,870
false
# Readability ES Sentences for three classes Model based on the Roberta architecture finetuned on [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for readability assessment of Spanish texts. ## Description and performance This version of the model was trained on a mix of datasets, using sentence-level granularity when possible. The model performs classification among three complexity levels: - Basic. - Intermediate. - Advanced. The relationship of these categories with the Common European Framework of Reference for Languages is described in [our report](https://wandb.ai/readability-es/readability-es/reports/Texts-Readability-Analysis-for-Spanish--VmlldzoxNzU2MDUx). This model achieves a F1 macro average score of 0.6951, measured on the validation set. ## Model variants - [`readability-es-sentences`](https://huggingface.co/hackathon-pln-es/readability-es-sentences). Two classes, sentence-based dataset. - [`readability-es-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-paragraphs). Two classes, paragraph-based dataset. - `readability-es-3class-sentences` (this model). Three classes, sentence-based dataset. - [`readability-es-3class-paragraphs`](https://huggingface.co/hackathon-pln-es/readability-es-3class-paragraphs). Three classes, paragraph-based dataset. ## Datasets - [`readability-es-hackathon-pln-public`](https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public), composed of: * coh-metrix-esp corpus. * Various text resources scraped from websites. - Other non-public datasets: newsela-es, simplext. ## Training details Please, refer to [this training run](https://wandb.ai/readability-es/readability-es/runs/1qe3kbqj/overview) for full details on hyperparameters and training regime. ## Biases and Limitations - Due to the scarcity of data and the lack of a reliable gold test set, performance metrics are reported on the validation set. - One of the datasets involved is the Spanish version of newsela, which is frequently used as a reference. However, it was created by translating previous datasets, and therefore it may contain somewhat unnatural phrases. - Some of the datasets used cannot be publicly disseminated, making it more difficult to assess the existence of biases or mistakes. - Language might be biased towards the Spanish dialect spoken in Spain. Other regional variants might be sub-represented. - No effort has been performed to alleviate the shortcomings and biases described in the [original implementation of BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish#bias-examples-spanish). ## Authors - [Laura Vásquez-Rodríguez](https://lmvasque.github.io/) - [Pedro Cuenca](https://twitter.com/pcuenq) - [Sergio Morales](https://www.fireblend.com/) - [Fernando Alva-Manchego](https://feralvam.github.io/)
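A minimal sketch of scoring a sentence across the three complexity levels with plain `transformers`; the Spanish example sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hackathon-pln-es/readability-es-3class-sentences"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "La fotosíntesis convierte la energía de la luz en energía química.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Print the probability assigned to each readability level
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```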
ad9e08f0da7d32ee3e3428441d754f1e
kamangir/image-classifier
kamangir
null
37
0
keras
0
null
false
false
false
cc
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,379
false
# Image Classifier `image-classifier` is an extendable TensorFlow image classifier w/ a Bash cli and Hugging Face integration - to see the list of `image-classifier` commands complete [installation](#Installation) and type in: ``` image_classifier ? ``` ## Installation To install `image-classifier` first [install and configure awesome-bash-cli](https://github.com/kamangir/awesome-bash-cli) then run: ``` abcli huggingface clone image-classifier ``` To see the list of `image-classifier` saved models type in ``` image_classifier list ``` You should see the following items: 1. [fashion-mnist](#fashion-mnist) 1. intel-image-classifier 🚧 1. vegetable-classifier 🚧 ## fashion-mnist ![image](./saved_model/fashion-mnist/image_classifier/prediction/00000.jpg) `fashion-mnist` is an `image-classifier` trained on [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist). To retrain `fashion-mnist` type in: ``` abcli select fashion_mnist train abcli upload image_classifier list . browser=1,model=object ``` You should now see the structure of the network (left) and the [content of the model](https://github.com/kamangir/browser) (right). | ![image](./abcli/assets/fashion_mnist_list.png) | ![image](./abcli/assets/fashion_mnist_browsed.png) | |---|---| You can save this model under a new name by typing in: ``` fashion_mnist save new_name_1 ``` / END
dcf3ba5b065c6a9f85ca6809f71f9bab
jonatasgrosman/exp_w2v2t_th_xlsr-53_s711
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['th']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'th']
false
true
true
464
false
# exp_w2v2t_th_xlsr-53_s711 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
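A minimal sketch with the `huggingsound` library mentioned above; the audio paths are placeholders and should point to 16 kHz recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_xlsr-53_s711")

# Hypothetical local audio files, sampled at 16 kHz
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])
print(transcriptions[0]["transcription"])
```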
6248389bd99b5cd26a97f074c8a03df6
muhtasham/tiny-mlm-glue-sst2-target-glue-stsb
muhtasham
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,103
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-sst2-target-glue-stsb This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9195 - Pearson: 0.8130 - Spearmanr: 0.8114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 2.7776 | 2.78 | 500 | 1.1238 | 0.7313 | 0.7669 | | 0.932 | 5.56 | 1000 | 1.0628 | 0.7833 | 0.8086 | | 0.737 | 8.33 | 1500 | 1.0050 | 0.8025 | 0.8208 | | 0.6099 | 11.11 | 2000 | 0.8592 | 0.8165 | 0.8220 | | 0.5164 | 13.89 | 2500 | 0.8875 | 0.8158 | 0.8181 | | 0.4659 | 16.67 | 3000 | 0.9524 | 0.8155 | 0.8198 | | 0.4114 | 19.44 | 3500 | 0.8872 | 0.8173 | 0.8174 | | 0.3728 | 22.22 | 4000 | 0.9423 | 0.8163 | 0.8166 | | 0.3396 | 25.0 | 4500 | 0.9953 | 0.8197 | 0.8202 | | 0.321 | 27.78 | 5000 | 0.9409 | 0.8160 | 0.8160 | | 0.3034 | 30.56 | 5500 | 0.9273 | 0.8142 | 0.8139 | | 0.2811 | 33.33 | 6000 | 0.9195 | 0.8130 | 0.8114 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
d00c7b40fe18d4cb4278c9f549963972
Mustafa21/my_awesome_food_model
Mustafa21
vit
7
0
transformers
0
image-classification
true
false
false
apache-2.0
null
['food101']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,449
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_food_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 1.2335 - Accuracy: 0.985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0523 | 1.0 | 50 | 1.9226 | 0.935 | | 1.3718 | 2.0 | 100 | 1.3422 | 0.995 | | 1.2298 | 3.0 | 150 | 1.2335 | 0.985 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
c1fe6d7435d02b62f37f526c78aa1e8c
Rakib/whisper-tiny-bn
Rakib
whisper
36
31
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['bn']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,561
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny Bengali This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 bn dataset. It achieves the following results on the evaluation set: - Loss: 0.2314 - Wer: 32.8977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3362 | 0.96 | 1000 | 0.3536 | 45.0860 | | 0.2395 | 1.91 | 2000 | 0.2745 | 37.1714 | | 0.205 | 2.87 | 3000 | 0.2485 | 34.7353 | | 0.1795 | 3.83 | 4000 | 0.2352 | 33.2469 | | 0.1578 | 4.78 | 5000 | 0.2314 | 32.8977 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
5218c0c8fe81cbe9cc28907cf2a80a60
google/t5-efficient-xl-nl8
google
t5
12
7
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,242
false
# T5-Efficient-XL-NL8 (Deep-Narrow version) T5-Efficient-XL-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-xl-nl8** - is of model type **Xl** with the following variations: - **nl** is **8** It has **972.49** million parameters and thus requires *ca.* **3889.95 MB** of memory in full precision (*fp32*) or **1944.97 MB** of memory in half precision (*fp16* or *bf16*). 
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
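As referenced in the Fine-Tuning section above, the sketch below shows one way the pretrained checkpoint could be loaded with Hugging Face Transformers before fine-tuning. The hub id `google/t5-efficient-xl-nl8` is an assumption, and the generated outputs are only meaningful once the model has actually been fine-tuned.

```python
# Minimal loading sketch (assumes the checkpoint is published as "google/t5-efficient-xl-nl8").
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-xl-nl8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# The checkpoint was pretrained with span-corruption only, so this is just the
# standard seq2seq training step you would run inside a fine-tuning loop.
inputs = tokenizer("summarize: The tower is 324 metres tall.", return_tensors="pt")
labels = tokenizer("A short summary.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # backpropagate this loss during fine-tuning
```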
e0306ae9d1780c774c1647ac55b9679d
alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616
alphahg
marian
13
179
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,394
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ko-en-finetuned-ko-to-en-2780616 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0458 | 1.0 | 9376 | 0.9283 | | 0.9423 | 2.0 | 18752 | 0.8607 | | 0.9013 | 3.0 | 28128 | 0.8435 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
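Since the card does not include a usage snippet, the following is a minimal, hedged inference sketch. The translation pipeline task and the repository id `alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616` are assumptions based on the model metadata, not part of the original card.

```python
# Minimal sketch: Korean -> English inference with the fine-tuned MarianMT checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616",  # assumed hub id
)

# Example sentence for illustration only.
print(translator("안녕하세요, 만나서 반갑습니다.", max_length=64)[0]["translation_text"])
```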
b046e8a0b7f80d770ce294a3a513ad8d
lmqg/bart-base-squad-qg-ae
lmqg
bart
21
42
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qg_squad']
null
0
0
0
0
0
0
0
['question generation', 'answer extraction']
true
true
true
7,038
false
# Model Card of `lmqg/bart-base-squad-qg-ae`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for question generation and answer extraction, trained jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/bart-base-squad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")

```

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-qg-ae")

# question generation: highlight the answer span with <hl> and use the "generate question:" prefix
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

# answer extraction: highlight the target sentence with <hl> and use the "extract answers:" prefix
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")

```

## Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)

|            |   Score | Type    | Dataset                                                        |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore  |   90.65 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| Bleu_1     |   56.53 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| Bleu_2     |   40.97 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| Bleu_3     |   31.71 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| Bleu_4     |   25.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| METEOR     |   25.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| MoverScore |   64.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |
| ROUGE_L    |   52.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad)  |

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)

|                                 |   Score | Type    | Dataset                                                        |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   93.45 | default | 
[lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 64.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.78 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 63.55 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 94.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 65.49 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 57.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 69.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 65.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 63.06 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 60.47 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 58.31 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 41.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 81.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 68.38 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: facebook/bart-base - max_length: 512 - max_length_output: 32 - epoch: 3 - batch: 32 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
0472ef1be1e53a1f9d972010531341c8
DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
DrishtiSharma
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['mt']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'mt', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
1,755
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-mt-o1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1987
- Wer: 0.1920

### Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

   ```bash
   python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs
   ```

2. To evaluate on speech-recognition-community-v2/dev_data: the Maltese language was not found in speech-recognition-community-v2/dev_data, so this evaluation is not available.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.1721        | 18.02 | 2000  | 0.3831          | 0.4066 |
| 0.7849        | 36.04 | 4000  | 0.2191          | 0.2417 |
| 0.6723        | 54.05 | 6000  | 0.2056          | 0.2134 |
| 0.6015        | 72.07 | 8000  | 0.2008          | 0.2031 |
| 0.5386        | 90.09 | 10000 | 0.1967          | 0.1953 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
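Beyond the evaluation script above, a minimal transcription sketch is given below. It is not part of the auto-generated card; it assumes 16 kHz mono input audio, an `ffmpeg` installation for decoding the file, and the repository id `DrishtiSharma/wav2vec2-xls-r-300m-mt-o1`.

```python
# Minimal sketch (assumption: "audio.wav" contains 16 kHz mono Maltese speech).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-xls-r-300m-mt-o1",
)

# CTC decoding as configured in the repository; ffmpeg is needed to read the file.
print(asr("audio.wav")["text"])
```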
1344c04148be6576a4d88d76f38dbc3b
Helsinki-NLP/opus-mt-bem-fi
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-bem-fi * source languages: bem * target languages: fi * OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.fi | 22.8 | 0.439 |
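The card above documents the OPUS-MT training setup only; the following is a minimal, hedged inference sketch using the standard MarianMT classes in Hugging Face Transformers. The example input ("Mwashibukeni", a Bemba greeting) is for illustration only.

```python
# Minimal sketch: Bemba -> Finnish translation with the MarianMT checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bem-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# SentencePiece pre-processing is handled by the tokenizer.
batch = tokenizer(["Mwashibukeni"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```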
485d38234d86f8230e783ce5fc24118e
Eppinette/Mona
Eppinette
null
4
0
null
6
text-to-image
false
false
false
mit
['en']
null
null
1
0
1
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
761
false
# Mona Subject Model / Dreambooth Training

## Usage

To use this model, download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder.

To use it in a prompt, write `"Mona woman"` for the highest strength, or just "Mona".

To increase the strength, put "Mona woman" in () brackets.

To decrease the strength, put "Mona woman" in [] brackets.

The model was trained from the waifu_diffusion base model for 4,000 steps.

Have fun :)

## Example Pictures from Mona_4k

<table>
  <tr>
    <td><img src=https://i.imgur.com/acDDsQZ.png width=150% height=150%/></td>
    <td><img src=https://i.imgur.com/15PnKDf.png width=100% height=100%/></td>
    <td><img src=https://i.imgur.com/PWxazM1.png width=150% height=150%/></td>
  </tr>
</table>
b2c30525fcbd821a65fac05a422926e4
darragh/swinunetr-btcv-base
darragh
null
6
0
transformers
0
null
true
false
false
apache-2.0
['en']
['BTCV']
null
0
0
0
0
0
0
0
['btcv', 'medical', 'swin']
false
true
true
4,694
false
# Model Overview

This repository contains the code for Swin UNETR [1,2]. Swin UNETR is the state-of-the-art on the Medical Segmentation Decathlon (MSD) and the Beyond the Cranial Vault (BTCV) Segmentation Challenge datasets. In [1], a novel methodology is devised for pre-training the Swin UNETR backbone in a self-supervised manner. We provide the option of training Swin UNETR by fine-tuning from pre-trained self-supervised weights or from scratch.

The source repository for the training of these models can be found [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV).

# Installing Dependencies

Dependencies for training and inference can be installed using the model requirements:

``` bash
pip install -r requirements.txt
```

# Intended uses & limitations

You can use the raw model for DICOM segmentation, but it's mostly intended to be fine-tuned on a downstream task.

Note that this model is primarily aimed at being fine-tuned on tasks which segment CT scans or MRIs stored in DICOM format. DICOM metadata often differs across medical facilities, so when applying the model to a new dataset, it should be fine-tuned.

# How to use

To install the necessary dependencies, run the following in bash:

```
git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc
pip install -r pmrc/requirements.txt
cd pmrc/SwinUNETR/BTCV
```

To load the model from the hub:

```
>>> from swinunetr import SwinUnetrModelForInference
>>> model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-base')
```

# Limitations and bias

The training data used for this model is specific to CT scans from certain health facilities and machines. Data from other facilities may differ in image distributions and may require fine-tuning of the models for best performance.

# Evaluation results

We provide several pre-trained models on the BTCV dataset in the following table.

<table>
  <tr>
    <th>Name</th>
    <th>Dice (overlap=0.7)</th>
    <th>Dice (overlap=0.5)</th>
    <th>Feature Size</th>
    <th># params (M)</th>
    <th>Self-Supervised Pre-trained </th>
  </tr>
  <tr>
    <td>Swin UNETR/Base</td>
    <td>82.25</td>
    <td>81.86</td>
    <td>48</td>
    <td>62.1</td>
    <td>Yes</td>
  </tr>
  <tr>
    <td>Swin UNETR/Small</td>
    <td>79.79</td>
    <td>79.34</td>
    <td>24</td>
    <td>15.7</td>
    <td>No</td>
  </tr>
  <tr>
    <td>Swin UNETR/Tiny</td>
    <td>72.05</td>
    <td>70.35</td>
    <td>12</td>
    <td>4.0</td>
    <td>No</td>
  </tr>
</table>

# Data Preparation

![image](https://lh3.googleusercontent.com/pw/AM-JKLX0svvlMdcrchGAgiWWNkg40lgXYjSHsAAuRc5Frakmz2pWzSzf87JQCRgYpqFR0qAjJWPzMQLc_mmvzNjfF9QWl_1OHZ8j4c9qrbR6zQaDJWaCLArRFh0uPvk97qAa11HtYbD6HpJ-wwTCUsaPcYvM=w1724-h522-no?authuser=0)

The training data is from the [BTCV challenge dataset](https://www.synapse.org/#!Synapse:syn3193805/wiki/217752).

- Target: 13 abdominal organs including 1. Spleen 2. Right Kidney 3. Left Kidney 4. Gallbladder 5. Esophagus 6. Liver 7. Stomach 8. Aorta 9. IVC 10. Portal and Splenic Veins 11. Pancreas 12. Right adrenal gland 13. Left adrenal gland.
- Task: Segmentation
- Modality: CT
- Size: 30 3D volumes (24 Training + 6 Testing)

# Training

See the source repository [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV) for information on training.
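For reference, the "Base" row in the evaluation table corresponds to a feature size of 48. Below is a minimal, hedged sketch of instantiating a comparable architecture directly with MONAI's generic `SwinUNETR` class rather than this repository's `SwinUnetrModelForInference` wrapper; the 96×96×96 patch size and the 14 output channels (13 organs plus background) are assumptions based on the BTCV setup described above.

```python
# Minimal sketch (assumptions: 96x96x96 CT patches, 14 output classes for BTCV).
# This uses MONAI's generic SwinUNETR implementation, not the HF wrapper above.
import torch
from monai.networks.nets import SwinUNETR

model = SwinUNETR(
    img_size=(96, 96, 96),  # required by older MONAI versions, deprecated in newer ones
    in_channels=1,          # single-channel CT volumes
    out_channels=14,        # 13 abdominal organs + background (assumed)
    feature_size=48,        # "Base" variant; 24 for Small, 12 for Tiny
    use_checkpoint=True,    # gradient checkpointing to reduce memory
)

x = torch.randn(1, 1, 96, 96, 96)  # dummy CT patch (B, C, D, H, W)
with torch.no_grad():
    logits = model(x)              # segmentation logits of shape (1, 14, 96, 96, 96)
```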
# BibTeX entry and citation info If you find this repository useful, please consider citing the following papers: ``` @inproceedings{tang2022self, title={Self-supervised pre-training of swin transformers for 3d medical image analysis}, author={Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={20730--20740}, year={2022} } @article{hatamizadeh2022swin, title={Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images}, author={Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger and Xu, Daguang}, journal={arXiv preprint arXiv:2201.01266}, year={2022} } ``` # References [1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740). [2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.
306bc1f6d766306d4be94b5df595dbc2
sd-concepts-library/sunfish
sd-concepts-library
null
18
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,922
false
### SunFish on Stable Diffusion This is the `<SunFish>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<SunFish> 0](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/4.jpeg) ![<SunFish> 1](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/12.jpeg) ![<SunFish> 2](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/8.jpeg) ![<SunFish> 3](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/0.jpeg) ![<SunFish> 4](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/6.jpeg) ![<SunFish> 5](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/3.jpeg) ![<SunFish> 6](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/11.jpeg) ![<SunFish> 7](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/10.jpeg) ![<SunFish> 8](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/7.jpeg) ![<SunFish> 9](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/2.jpeg) ![<SunFish> 10](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/9.jpeg) ![<SunFish> 11](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/1.jpeg) ![<SunFish> 12](https://huggingface.co/sd-concepts-library/sunfish/resolve/main/concept_images/5.jpeg)
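As a hedged convenience sketch (not part of the original concept card), the learned embedding can typically also be loaded with the Diffusers textual-inversion loader. The Stable Diffusion v1.x base checkpoint used here and GPU availability are assumptions; the `<SunFish>` token comes from the description above.

```python
# Minimal sketch (assumptions: an SD v1.x base model and a CUDA-capable GPU).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the textual-inversion embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/sunfish")

image = pipe("a colorful poster in the style of <SunFish>").images[0]
image.save("sunfish_style.png")
```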
8b95693b8fed43e917a0d769662f9fb1
lmqg/flan-t5-small-squad-qag
lmqg
t5
13
2
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qag_squad']
null
0
0
0
0
0
0
0
['questions and answers generation']
true
true
true
3,892
false
# Model Card of `lmqg/flan-t5-small-squad-qag`
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/flan-t5-small-squad-qag")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")

```

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

```

## Evaluation

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:------------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   92.3  | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |
| QAAlignedF1Score (MoverScore)   |   63.74 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |
| QAAlignedPrecision (BERTScore)  |   92.92 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |
| QAAlignedPrecision (MoverScore) |   65.5  | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |
| QAAlignedRecall (BERTScore)     |   91.71 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |
| QAAlignedRecall (MoverScore)    |   62.2  | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad)  |

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: google/flan-t5-small
- max_length: 512
- max_length_output: 256
- epoch: 14
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.0

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-qag/raw/main/trainer_config.json).
## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
5aa966e0eb35d14b596ec280d4c81e9d