| column | dtype | values |
|:--|:--|:--|
| repo_id | stringlengths | 4–110 |
| author | stringlengths | 2–27 |
| model_type | stringlengths | 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | stringlengths | 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | stringlengths | 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | stringlengths | 2–30 |
| languages | stringlengths | 4–1.63k |
| datasets | stringlengths | 2–2.58k |
| co2 | stringclasses | 29 values |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | stringlengths | 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | stringlengths | 0–598k |
| hash | stringlengths | 32–32 |
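Each record below follows this schema. As a rough sketch of how such a dump might be inspected with the `datasets` library (the repository id `user/model-card-dump` is a hypothetical placeholder, not the dataset's real name):

```python
from collections import Counter

from datasets import load_dataset

# "user/model-card-dump" is a hypothetical placeholder for the actual dataset repo id.
ds = load_dataset("user/model-card-dump", split="train")

# Distribution over libraries, and a peek at one record's metadata fields.
print(Counter(ds["library"]).most_common(5))
row = ds[0]
print(row["repo_id"], row["license"], row["text_length"])
```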

---

repo_id: lmqg/mt5-base-dequad-qg
author: lmqg
model_type: mt5
files_per_repo: 20
downloads_30d: 189
library: transformers
likes: 0
pipeline: text2text-generation
pytorch: true
tensorflow: false
jax: false
license: cc-by-4.0
languages: ['de']
datasets: ['lmqg/qg_dequad']
co2: null
prs_count: 0
prs_open: 0
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: ['question generation']
has_model_index: true
has_metadata: true
has_text: true
text_length: 6,519
is_nc: false
readme:
# Model Card of `lmqg/mt5-base-dequad-qg`

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** de
- **Training data:** [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage

- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="de", model="lmqg/mt5-base-dequad-qg")

# model prediction
questions = model.generate_q(list_context="das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).", list_answer="1855")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-dequad-qg")
output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls <hl> wird die Signalübertragung stark gedämpft. <hl>")
```

## Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-dequad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   80.39 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_1     |   10.85 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_2     |    4.61 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_3     |    2.06 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_4     |    0.87 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| METEOR     |   13.65 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| MoverScore |   55.73 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| ROUGE_L    |   11.1  | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |

- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mt5-base-dequad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_dequad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   90.63 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedF1Score (MoverScore)   |   65.32 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedPrecision (BERTScore)  |   90.65 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedPrecision (MoverScore) |   65.34 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedRecall (BERTScore)     |   90.61 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedRecall (MoverScore)    |   65.3  | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |

- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated from the answer produced by [`lmqg/mt5-base-dequad-ae`](https://huggingface.co/lmqg/mt5-base-dequad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-base-dequad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_dequad.default.lmqg_mt5-base-dequad-ae.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   76.86 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedF1Score (MoverScore)   |   52.96 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedPrecision (BERTScore)  |   76.28 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedPrecision (MoverScore) |   52.93 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedRecall (BERTScore)     |   77.55 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| QAAlignedRecall (MoverScore)    |   53.06 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_dequad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 17
- batch: 4
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found in the [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-dequad-qg/raw/main/trainer_config.json).

## Citation

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
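The pipeline results above pair this QG model with `lmqg/mt5-base-dequad-ae` for answer extraction. A rough sketch of that setup, assuming `TransformersQG` accepts a companion answer-extraction model via `model_ae` and exposes `generate_qa`, as shown in the `lmqg` repository README:

```python
from lmqg import TransformersQG

# Assumed API: pair the QG checkpoint with a separate answer-extraction (AE) checkpoint.
model = TransformersQG(
    language="de",
    model="lmqg/mt5-base-dequad-qg",     # question generation
    model_ae="lmqg/mt5-base-dequad-ae",  # answer extraction (pipeline approach)
)

# End-to-end question & answer generation from a raw paragraph.
context = "Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls wird die Signalübertragung stark gedämpft."
question_answer_pairs = model.generate_qa(context)
print(question_answer_pairs)
```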
hash: 4de004e2d0863e5beafbb7d59a3a40dd

---

repo_id: Mirelle/t5-small-finetuned-ro-to-en
author: Mirelle
model_type: t5
files_per_repo: 12
downloads_30d: 3
library: transformers
likes: 0
pipeline: text2text-generation
pytorch: true
tensorflow: false
jax: false
license: apache-2.0
languages: null
datasets: ['wmt16']
co2: null
prs_count: 0
prs_open: 0
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: ['generated_from_trainer']
has_model_index: true
has_metadata: true
has_text: true
text_length: 2,570
is_nc: false
readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-ro-to-en

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5877
- Bleu: 13.4499
- Gen Len: 17.5073

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.6167        | 0.05  | 2000  | 1.8649          | 9.7029  | 17.5753 |
| 1.4551        | 0.1   | 4000  | 1.7810          | 10.6382 | 17.5358 |
| 1.3723        | 0.16  | 6000  | 1.7369          | 11.1285 | 17.5158 |
| 1.3373        | 0.21  | 8000  | 1.7086          | 11.6173 | 17.5013 |
| 1.2935        | 0.26  | 10000 | 1.6890          | 12.0641 | 17.5038 |
| 1.2632        | 0.31  | 12000 | 1.6670          | 12.3012 | 17.5253 |
| 1.2463        | 0.37  | 14000 | 1.6556          | 12.3991 | 17.5153 |
| 1.2272        | 0.42  | 16000 | 1.6442          | 12.7392 | 17.4732 |
| 1.2052        | 0.47  | 18000 | 1.6328          | 12.8446 | 17.5143 |
| 1.1985        | 0.52  | 20000 | 1.6233          | 13.0892 | 17.4807 |
| 1.1821        | 0.58  | 22000 | 1.6153          | 13.1529 | 17.4952 |
| 1.1791        | 0.63  | 24000 | 1.6079          | 13.2964 | 17.5088 |
| 1.1698        | 0.68  | 26000 | 1.6038          | 13.3548 | 17.4842 |
| 1.154         | 0.73  | 28000 | 1.5957          | 13.3012 | 17.5053 |
| 1.1634        | 0.79  | 30000 | 1.5931          | 13.4203 | 17.5083 |
| 1.1487        | 0.84  | 32000 | 1.5893          | 13.3959 | 17.5123 |
| 1.1495        | 0.89  | 34000 | 1.5875          | 13.3745 | 17.4902 |
| 1.1458        | 0.94  | 36000 | 1.5877          | 13.4129 | 17.5043 |
| 1.1465        | 1.0   | 38000 | 1.5877          | 13.4499 | 17.5073 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
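The card omits a usage example. A minimal sketch with the `transformers` text2text-generation pipeline follows; the T5-style source prefix is an assumption, since the card does not document the prompt format used during fine-tuning:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text2text-generation pipeline.
translator = pipeline("text2text-generation", model="Mirelle/t5-small-finetuned-ro-to-en")

# Assumed T5-style task prefix; the exact training prompt is not documented in the card.
text = "translate Romanian to English: Acesta este un exemplu de propoziție."
print(translator(text, max_length=64)[0]["generated_text"])
```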
hash: 65b8db644725c27cb897b9fe58105171

---

repo_id: google/multiberts-seed_0-step_60k
author: google
model_type: bert
files_per_repo: 8
downloads_30d: 15
library: transformers
likes: 0
pipeline: null
pytorch: true
tensorflow: true
jax: false
license: apache-2.0
languages: ['en']
datasets: null
co2: null
prs_count: 0
prs_open: 0
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: ['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_60k']
has_model_index: false
has_metadata: true
has_text: true
text_length: 3,515
is_nc: false
readme:
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 60k

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure.

We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #0, captured at step 60k (max: 2000k, i.e., 2M steps).

## Model Description

This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives.

The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences from the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_60k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_60k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
hash: 91c2d4e2078ff630fa229149d0079a86

---

repo_id: jonfd/electra-base-igc-is
author: jonfd
model_type: electra
files_per_repo: 7
downloads_30d: 2
library: transformers
likes: 0
pipeline: null
pytorch: true
tensorflow: false
jax: false
license: cc-by-4.0
languages: ['is']
datasets: ['igc']
co2: null
prs_count: 0
prs_open: 0
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: []
has_model_index: false
has_metadata: true
has_text: true
text_length: 607
is_nc: false
readme:
# Icelandic ELECTRA-Base

This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.

# Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).

This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
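The card gives no usage snippet. A minimal sketch for extracting contextual embeddings with `transformers`, using only standard `AutoTokenizer`/`AutoModel` loading (the example sentence is arbitrary):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-base-igc-is")
model = AutoModel.from_pretrained("jonfd/electra-base-igc-is")

# Encode an Icelandic sentence and inspect the last hidden states.
inputs = tokenizer("Halló heimur!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```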
hash: 60559a876ff24f32f3df16dfa8098623

---

repo_id: kingery/hyc-06-512-sd15-2e-6-1500-man-ddim
author: kingery
model_type: null
files_per_repo: 24
downloads_30d: 5
library: diffusers
likes: 0
pipeline: text-to-image
pytorch: false
tensorflow: false
jax: false
license: creativeml-openrail-m
languages: null
datasets: null
co2: null
prs_count: 1
prs_open: 1
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: ['text-to-image']
has_model_index: false
has_metadata: true
has_text: true
text_length: 1,590
is_nc: false
readme:
### hyc-06-512-sd15-2e-6-1500-man-ddim on Stable Diffusion via Dreambooth

#### model by kingery

This is the Stable Diffusion model fine-tuned on the hyc-06-512-sd15-2e-6-1500-man-ddim concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of yangguangkechuang man**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/02.png)
![image 1](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/03.png)
![image 2](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/04.png)
![image 3](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/01.png)
![image 4](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/06.png)
![image 5](https://huggingface.co/kingery/hyc-06-512-sd15-2e-6-1500-man-ddim/resolve/main/concept_images/05.png)
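A minimal inference sketch with `diffusers`, using the instance prompt given above; loading in fp16 on a CUDA GPU is an assumption, and the linked notebooks show the full workflow:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tuned checkpoint (fp16 on GPU is assumed, not required).
pipe = StableDiffusionPipeline.from_pretrained(
    "kingery/hyc-06-512-sd15-2e-6-1500-man-ddim",
    torch_dtype=torch.float16,
).to("cuda")

# Instance prompt from the card.
image = pipe("a photo of yangguangkechuang man").images[0]
image.save("sample.png")
```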
hash: c5bc3c056f2003e62d438f0ac7ee4d71

---

repo_id: hassnain/wav2vec2-base-timit-demo-colab3
author: hassnain
model_type: wav2vec2
files_per_repo: 16
downloads_30d: 5
library: transformers
likes: 0
pipeline: automatic-speech-recognition
pytorch: true
tensorflow: false
jax: false
license: apache-2.0
languages: null
datasets: null
co2: null
prs_count: 0
prs_open: 0
prs_merged: 0
prs_closed: 0
discussions_count: 0
discussions_open: 0
discussions_closed: 0
tags: ['generated_from_trainer']
has_model_index: true
has_metadata: true
has_text: true
text_length: 1,462
is_nc: false
readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-demo-colab3

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1016
- Wer: 0.6704

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0006        | 13.89 | 500  | 3.0706          | 1.0    |
| 1.8796        | 27.78 | 1000 | 1.1154          | 0.7414 |
| 0.548         | 41.67 | 1500 | 1.0826          | 0.7034 |
| 0.2747        | 55.56 | 2000 | 1.1016          | 0.6704 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
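The card shows no inference code. A minimal sketch with the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder, and 16 kHz mono input is assumed, as for `facebook/wav2vec2-base`:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="hassnain/wav2vec2-base-timit-demo-colab3")

# "speech.wav" is a placeholder path; the audio should be 16 kHz mono.
print(asr("speech.wav")["text"])
```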
hash: 7aadba30b8a9bb4f9dbe3869edbbc0a1