text (string, 21 to 2.11k characters)
label (class label, 2 classes: 0 = dataset_mention, 1 = no_dataset_mention)
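The preview above describes a two-column text-classification dataset. A minimal sketch of loading and inspecting such a dataset with 🤗 Datasets, assuming a hypothetical repo id (substitute the real one):

```python
# Minimal sketch of loading a dataset with the schema shown above.
# "user/dataset-mention-detection" is a hypothetical path; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/dataset-mention-detection", split="train")

# Two columns: "text" (string) and "label" (ClassLabel with 2 classes).
print(ds.features)
print(ds.features["label"].names)  # per the preview: ["dataset_mention", "no_dataset_mention"]

example = ds[0]
print(example["text"][:80], "->", ds.features["label"].int2str(example["label"]))
```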
Intended uses & limitations More information needed
1no_dataset_mention
Training and evaluation data More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Training results
1no_dataset_mention
Training and evaluation data More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
all-roberta-large-v1-utility-5-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3728 - Accuracy: 0.3956
0dataset_mention
donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
0dataset_mention
all-roberta-large-v1-travel-9-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289
0dataset_mention
DevRoBERTa DevRoBERTa is a Devanagari RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on publicly available Hindi and Marathi monolingual datasets. [Project link](https://github.com/l3cube-pune/MarathiNLP). More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418). Citing: ``` @article{joshi2022l3cubehind, title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages}, author={Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11418}, year={2022} } ```
0dataset_mention
Training procedure
1no_dataset_mention
mobilebert_add_GLUE_Experiment_logit_kd_rte_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.3914 - Accuracy: 0.5271
0dataset_mention
K-12BERT model K-12BERT is a model trained by continued pretraining on the K-12Corpus. Since BERT-like models have shown great progress on domain-adaptive tasks, we noticed the lack of such a model for the education domain (especially K-12 education). To that end we present K-12BERT, a BERT-based model trained on our custom curated dataset, extracted from both open and proprietary education resources. The model was trained with an MLM objective in a continued-pretraining fashion, due to the lack of resources available to train the model from the ground up. This also allowed us to save a lot of computational resources and utilize the existing knowledge of BERT. To that end we also preserve the original vocabulary of BERT, to evaluate its performance under those conditions.
0dataset_mention
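The K-12BERT row above describes continued MLM pretraining from a BERT checkpoint while keeping the original vocabulary. A minimal sketch of that setup with 🤗 Transformers, assuming a hypothetical local corpus file (`k12_corpus.txt`) and illustrative hyperparameters:

```python
# Hedged sketch of continued MLM pretraining from a BERT checkpoint, reusing the
# original tokenizer/vocabulary unchanged. "k12_corpus.txt" is a hypothetical file
# standing in for the education corpus; hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

corpus = load_dataset("text", data_files={"train": "k12_corpus.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="k12bert-continued",
                         per_device_train_batch_size=16, num_train_epochs=1)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```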
Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.
0dataset_mention
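The row above describes an unconditional diffusion model trained with 🤗 Diffusers. A minimal sampling sketch, assuming a hypothetical repo id for the trained pipeline:

```python
# Minimal sketch of sampling from an unconditional diffusion model trained with 🤗 Diffusers.
# "user/ddpm-butterflies-128" is a hypothetical repo id standing in for this model.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("user/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly_sample.png")
```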
Performance on Text Dataset We conducted experiments on in-house test sets from three domains: internet, healthcare, and finance.

| Model | Finance 0-shot | Finance 5-shot | Healthcare 0-shot | Healthcare 5-shot | Internet 0-shot | Internet 5-shot |
|---|---|---|---|---|---|---|
| uie-base (12L768H) | 46.43 | 70.92 | **71.83** | 85.72 | 78.33 | 81.86 |
| uie-medium (6L768H) | 41.11 | 64.53 | 65.40 | 75.72 | 78.32 | 79.68 |
| uie-mini (6L384H) | 37.04 | 64.65 | 60.50 | 78.36 | 72.09 | 76.38 |
| uie-micro (4L384H) | 37.53 | 62.11 | 57.04 | 75.92 | 66.00 | 70.22 |
| uie-nano (4L312H) | 38.94 | 66.83 | 48.29 | 76.74 | 62.86 | 72.35 |
| uie-m-large (24L1024H) | **49.35** | **74.55** | 70.50 | **92.66** | 78.49 | **83.02** |
| uie-m-base (12L768H) | 38.46 | 74.31 | 63.37 | 87.32 | 76.27 | 80.13 |
| 🧾🎓 **uie-x-base (12L768H)** | 48.84 | 73.87 | 65.60 | 88.81 | **79.36** | 81.65 |

0-shot means no training data is used and predictions are made directly through paddlenlp.Taskflow; 5-shot means each category contributes 5 labeled examples for model fine-tuning. The experiments show that UIE can further improve its performance with a small amount of data (few-shot). > Detailed info: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/applications/information_extraction/README_en.md
0dataset_mention
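The UIE row above mentions 0-shot prediction through paddlenlp.Taskflow. A minimal sketch of that call path; the schema and input sentence are illustrative, and the model name can be any of the sizes listed in the table:

```python
# Hedged sketch of 0-shot extraction through paddlenlp.Taskflow, as referenced above.
# The schema and example sentence are illustrative only; uie-base targets Chinese text,
# so pick a model/language combination that matches your data.
from paddlenlp import Taskflow

schema = ["person", "organization", "time"]  # entity types to extract
ie = Taskflow("information_extraction", schema=schema, model="uie-base")
print(ie("In 2017, Google researchers published the Transformer paper."))
```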
distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4626
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
1no_dataset_mention
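The hyperparameter row above corresponds closely to a transformers.TrainingArguments configuration. A sketch of that mapping, with a placeholder output directory (the Adam betas/epsilon shown match the library defaults):

```python
# How the listed hyperparameters map onto transformers.TrainingArguments (a sketch;
# output_dir is a placeholder). Adam betas/epsilon and the seed are written out
# explicitly even though they match the defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetune-output",        # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```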
Training and evaluation data More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00034 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 750 - num_epochs: 50 - mixed_precision_training: Native AMP
1no_dataset_mention
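The hyperparameter row above implies a linear schedule with 750 warmup steps and an effective batch size of 16 × 2 = 32. A sketch of that schedule using transformers' helper; the model, optimizer wiring, and total step count are illustrative:

```python
# Sketch of the schedule implied above: linear warmup for 750 steps, then linear decay.
# The stand-in model and num_training_steps are illustrative; the real step count
# depends on dataset size and the 50 training epochs.
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=3.4e-4,
                              betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=750,
                                            num_training_steps=20_000)

# Effective (total) train batch size = per-device batch size * accumulation steps.
assert 16 * 2 == 32
```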
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
JimmyWu/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0086 - Validation Loss: 0.0791 - Epoch: 4
0dataset_mention
Model Description This is a RoBERTa model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
base-mlm-tweet This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2872
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3184 - Accuracy: 0.8667 - F1: 0.8684
0dataset_mention
Training procedure
1no_dataset_mention
distilbert-base-uncased-finetuned-ft500_4class This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1343 - Accuracy: 0.4853 - F1: 0.4777
0dataset_mention
distilbert-base-uncased-qa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1925
0dataset_mention
MiniLM-L12-H384-uncased__sst2__all-train This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2632 - Accuracy: 0.9055
0dataset_mention
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9271 - Recall: 0.9381 - F1: 0.9326 - Accuracy: 0.9836
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
token_fine_tunned_flipkart_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3435 - Precision: 0.8797 - Recall: 0.9039 - F1: 0.8916 - Accuracy: 0.9061
0dataset_mention
Training details cT5 models used T5's weights as a starting point and were then fine-tuned on the English [wikipedia](https://huggingface.co/datasets/wikipedia) dataset for 3 epochs, achieving ~74% validation accuracy (ct5-small). The training script is written in JAX + Flax and can be found in `pretrain_ct5.py`. Flax checkpoints can be converted to PyTorch via `convert_flax_to_pytorch.py [flax_dirname]`.
0dataset_mention
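The cT5 row above mentions converting Flax checkpoints to PyTorch via the repo's `convert_flax_to_pytorch.py`. A generic alternative sketch uses transformers' built-in `from_flax=True` loading path; the repo id is hypothetical, and this assumes the checkpoint loads with the stock T5 class, which may not hold for a customized architecture:

```python
# Generic way to load a Flax checkpoint into PyTorch with transformers (a sketch; the
# card's own convert_flax_to_pytorch.py is the canonical route). "user/ct5-small" is a
# hypothetical repo id or local directory, and flax must be installed for from_flax=True.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("user/ct5-small", from_flax=True)
model.save_pretrained("ct5-small-pytorch")  # writes a PyTorch checkpoint
```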
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Haakf/allsides_right_text_headline_padded_overfit This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8995 - Validation Loss: 1.7970 - Epoch: 19
0dataset_mention
bert-base-multilingual-cased-tuned-smartcat This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000
0dataset_mention
Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
sachinsahu/Human_Development_Index-clustered This model is a fine-tuned version of [nandysoham16/4-clustered_aug](https://huggingface.co/nandysoham16/4-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2111 - Train End Logits Accuracy: 0.9583 - Train Start Logits Accuracy: 0.9479 - Validation Loss: 0.0171 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
0dataset_mention
rdpatilds/distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.6914 - Validation Loss: 2.5383 - Epoch: 0
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
BERiT_2000 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.7293
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
distilbert-base-uncased-finetuned2-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725
0dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Romanian paraphrase ![v1.0](https://img.shields.io/badge/V.1-03.08.2022-brightgreen) A fine-tuned t5-small model for paraphrasing. Since there is no Romanian dataset for paraphrasing, I had to create my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v1). The dataset contains ~60k examples.
0dataset_mention
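The row above links the BlackKakapo/paraphrase-ro-v1 dataset. A minimal loading sketch; the split name is an assumption, so check the dataset card:

```python
# Sketch of loading the linked paraphrase corpus; split="train" is an assumption,
# so check the dataset card for the actual splits and column names.
from datasets import load_dataset

ds = load_dataset("BlackKakapo/paraphrase-ro-v1", split="train")
print(ds)      # ~60k examples according to the card
print(ds[0])
```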
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. You need to install Enelvo, an open-source spell corrector trained on Twitter user posts: `pip install enelvo`

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re

test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()
```
0dataset_mention
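The evaluation snippet above stops after setting up the model, resampler, and normaliser. A typical continuation, following the common Common Voice WER recipe rather than the card author's exact script, picks up from the names defined above:

```python
# A typical continuation (not necessarily the card author's exact script): decode the
# test set, normalise predictions with Enelvo, and report word error rate. Reuses
# torch, re, torchaudio, chars_to_ignore_regex, resampler, processor, model, norm,
# wer, and test_dataset from the setup block above.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = [norm.normalise(s) for s in processor.batch_decode(pred_ids)]
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"],
                                             references=result["sentence"])))
```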
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
1no_dataset_mention
Training procedure
1no_dataset_mention
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [Tianyi98/opt-350m-finetuned-cola](https://huggingface.co/Tianyi98/opt-350m-finetuned-cola) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4133 - Accuracy: 0.92 - F1: 0.9205
0dataset_mention
A tiny GPT2 model for generating Hebrew text A distilGPT2-sized model. Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/. The XML was converted to plain text using Wikipedia Extractor (http://medialab.di.unipi.it/wiki/Wikipedia_Extractor). I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines.
0dataset_mention
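The row above describes adding <|startoftext|>/<|endoftext|> markers and deleting empty lines after running Wikipedia Extractor. A minimal sketch of that step; the file names are placeholders, and wrapping each line is an assumption, since the card does not say whether the markers wrap lines or whole articles:

```python
# Sketch of the preprocessing described above: drop empty lines and wrap the rest with
# <|startoftext|> / <|endoftext|> markers. File names are placeholders; the plain text
# is assumed to come from Wikipedia Extractor. Wrapping per line is an assumption.
with open("hewiki_extracted.txt", encoding="utf-8") as src, \
     open("hewiki_train.txt", "w", encoding="utf-8") as dst:
    for line in src:
        line = line.strip()
        if not line:
            continue  # delete empty lines
        dst.write("<|startoftext|>" + line + "<|endoftext|>\n")
```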
Intended uses & limitations More information needed
1no_dataset_mention
Training procedure
1no_dataset_mention
Intended uses & limitations More information needed
1no_dataset_mention
🚀 Text Punctuator Based on the Transformers T5 model, fine-tuned for punctuation restoration. The model currently supports only French; support for more languages will be added later using mT5. Training datasets: the model was trained on 2 French datasets (around 500k records): - [orange_sum](https://huggingface.co/datasets/orange_sum) - [mlsum](https://huggingface.co/datasets/mlsum) (French text only) More info will be added later.
0dataset_mention
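The punctuator row above describes a T5 model fine-tuned for French punctuation restoration. A minimal inference sketch; the repo id and the absence of a task prefix are assumptions, so check the model card for the exact input format:

```python
# Hedged sketch of punctuation restoration with a fine-tuned T5 model. The repo id is
# hypothetical and no task prefix is assumed - verify both against the model card.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "user/t5-french-punctuator"  # hypothetical
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "bonjour comment allez vous j espere que tout va bien"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```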