repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
osanseviero/corenlp_french | osanseviero | null | 3 | 0 | null | 0 | null | false | false | false | gpl | ['fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['corenlp'] | false | true | true | 501 | false |
# Core NLP model for fr
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
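As a usage illustration, a minimal sketch of querying a CoreNLP server from Python via the Stanza client (this assumes a local CoreNLP installation with the French models and the `stanza` package installed; the text and annotator list are illustrative):
```python
from stanza.server import CoreNLPClient
# Assumes CORENLP_HOME points at a local CoreNLP install that includes the
# French models; "french" selects the French properties file.
text = "Stanford CoreNLP produit des annotations linguistiques pour le texte français."
with CoreNLPClient(properties="french",
                   annotators=["tokenize", "ssplit", "pos", "lemma", "ner"],
                   timeout=30000, memory="4G") as client:
    ann = client.annotate(text)
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos)
```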
| bda70e2a80c03ff01404617033a8d35c |
sshasnain/wav2vec2-xls-r-timit-trainer | sshasnain | wav2vec2 | 11 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,756 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1064
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
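Until this section is filled in, a minimal inference sketch (assuming the standard 🤗 Transformers ASR pipeline applies to this checkpoint; the audio file name is illustrative):
```python
from transformers import pipeline
# CTC fine-tune of wav2vec2-xls-r-300m; speech input should be sampled at 16 kHz.
asr = pipeline("automatic-speech-recognition", model="sshasnain/wav2vec2-xls-r-timit-trainer")
print(asr("sample.wav")["text"])
```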
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5537 | 4.03 | 500 | 0.6078 | 1.0 |
| 0.5444 | 8.06 | 1000 | 0.4990 | 0.9994 |
| 0.3744 | 12.1 | 1500 | 0.5530 | 1.0 |
| 0.2863 | 16.13 | 2000 | 0.6401 | 1.0 |
| 0.2357 | 20.16 | 2500 | 0.6485 | 1.0 |
| 0.1933 | 24.19 | 3000 | 0.7448 | 0.9994 |
| 0.162 | 28.22 | 3500 | 0.7502 | 1.0 |
| 0.1325 | 32.26 | 4000 | 0.7801 | 1.0 |
| 0.1169 | 36.29 | 4500 | 0.8334 | 1.0 |
| 0.1031 | 40.32 | 5000 | 0.8269 | 1.0 |
| 0.0913 | 44.35 | 5500 | 0.8432 | 1.0 |
| 0.0793 | 48.39 | 6000 | 0.8738 | 1.0 |
| 0.0694 | 52.42 | 6500 | 0.8897 | 1.0 |
| 0.0613 | 56.45 | 7000 | 0.8966 | 1.0 |
| 0.0548 | 60.48 | 7500 | 0.9398 | 1.0 |
| 0.0444 | 64.51 | 8000 | 0.9548 | 1.0 |
| 0.0386 | 68.55 | 8500 | 0.9647 | 1.0 |
| 0.0359 | 72.58 | 9000 | 0.9901 | 1.0 |
| 0.0299 | 76.61 | 9500 | 1.0151 | 1.0 |
| 0.0259 | 80.64 | 10000 | 1.0526 | 1.0 |
| 0.022 | 84.67 | 10500 | 1.0754 | 1.0 |
| 0.0189 | 88.71 | 11000 | 1.0688 | 1.0 |
| 0.0161 | 92.74 | 11500 | 1.0914 | 1.0 |
| 0.0138 | 96.77 | 12000 | 1.1064 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| 1884781c3415381430d78030e542d8d9 |
pratikrelekar/fin_sentiment | pratikrelekar | distilbert | 13 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,200 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5739
- Accuracy: 0.7703
## Model description
More information needed
## Intended uses & limitations
More information needed
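Pending more details from the author, a minimal inference sketch (assuming the standard text-classification pipeline applies; the example sentence is illustrative):
```python
from transformers import pipeline
# DistilBERT sequence-classification head fine-tuned for financial sentiment.
classifier = pipeline("text-classification", model="pratikrelekar/fin_sentiment")
print(classifier("Quarterly revenue rose 12% year over year."))
```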
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5739 | 0.7703 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| fdd0a3182020a21821fc3a541fcc87fb |
google/bert_uncased_L-2_H-128_A-2 | google | bert | 6 | 1,260,933 | transformers | 15 | null | true | false | true | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 4,526 | false |
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
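For illustration, a minimal sketch of loading this checkpoint (BERT-Tiny) for fine-tuning with 🤗 Transformers (the `num_labels` value is illustrative; training itself proceeds exactly as for the original BERT models):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# BERT-Tiny (L=2, H=128) loads exactly like the larger BERT checkpoints; the
# classification head below is freshly initialized and must be fine-tuned.
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-2_H-128_A-2", num_labels=2
)
inputs = tokenizer("The smaller BERT models fine-tune quickly.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```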
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
| 497ae5325775ec72ddff7ddcf91c099e |
michellehbn/brrrr | michellehbn | t5 | 10 | 5 | transformers | 0 | translation | true | true | true | apache-2.0 | ['en', 'fr', 'ro', 'de', 'multilingual'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'translation'] | false | true | true | 8,348 | false |
# Model Card for T5 Small
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Small is the checkpoint with 60 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
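To make the text-to-text format concrete, a minimal sketch of prompting T5-Small with a task prefix (the prefix and sentence are illustrative; see the Hugging Face T5 docs for the full set of supported prefixes):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
# Tasks are selected with a text prefix; here, the supervised translation task.
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```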
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
The following datasets were used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
- STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [de Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-small, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5Model.from_pretrained("t5-small")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
| 12e9103ecfff795538d8fef3b278be04 |
baffo32/t5-base-ptmap | baffo32 | t5 | 10 | 12 | transformers | 0 | translation | true | true | true | apache-2.0 | ['en', 'fr', 'ro', 'de'] | ['c4'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'translation'] | false | true | true | 1,729 | false |
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
| 97d99e4cfeb799601875901db07e09aa |
jonatasgrosman/exp_w2v2t_pl_unispeech-ml_s463 | jonatasgrosman | unispeech | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 500 | false | # exp_w2v2t_pl_unispeech-ml_s463
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
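A minimal transcription sketch (assuming the HuggingSound `SpeechRecognitionModel` interface; the audio path is illustrative):
```python
from huggingsound import SpeechRecognitionModel
# HuggingSound inference wrapper around the fine-tuned CTC checkpoint;
# audio files should be sampled at 16 kHz.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pl_unispeech-ml_s463")
transcriptions = model.transcribe(["sample.wav"])
print(transcriptions[0]["transcription"])
```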
| 26f4b63426a6bb158e0b514697c43a4a |
fathyshalab/all-roberta-large-v1-utility-2-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,512 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-utility-2-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3728
- Accuracy: 0.3956
## Model description
More information needed
## Intended uses & limitations
More information needed
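Until the card is completed, a minimal inference sketch using the raw sequence-classification API (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "fathyshalab/all-roberta-large-v1-utility-2-16-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
# Score one example and map the top logit back to its label name.
inputs = tokenizer("Please schedule a maintenance visit for my water heater.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```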
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8194 | 1.0 | 1 | 2.6027 | 0.3156 |
| 2.2337 | 2.0 | 2 | 2.5079 | 0.3778 |
| 1.7996 | 3.0 | 3 | 2.4293 | 0.3822 |
| 1.4591 | 4.0 | 4 | 2.3728 | 0.3956 |
| 1.3205 | 5.0 | 5 | 2.3439 | 0.3956 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 09c4f1903f4e7f4aa225296a6d27104d |
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13 | StivenLancheros | roberta | 14 | 19 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,714 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Precision: 0.7936
- Recall: 0.8067
- F1: 0.8001
- Accuracy: 0.9451
## Model description
More information needed
## Intended uses & limitations
More information needed
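Until this section is filled in, a minimal inference sketch (assuming the standard token-classification pipeline applies; the Spanish clinical sentence is illustrative):
```python
from transformers import pipeline
# Grouped entity spans from the fine-tuned Spanish biomedical/clinical NER head.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-BioNLP13",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta fiebre y dolor abdominal agudo."))
```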
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4206 | 1.0 | 692 | 0.2182 | 0.7513 | 0.7757 | 0.7633 | 0.9342 |
| 0.1872 | 2.0 | 1384 | 0.2032 | 0.7779 | 0.7865 | 0.7821 | 0.9398 |
| 0.0982 | 3.0 | 2076 | 0.2043 | 0.7995 | 0.7904 | 0.7949 | 0.9443 |
| 0.0735 | 4.0 | 2768 | 0.2217 | 0.7936 | 0.8067 | 0.8001 | 0.9451 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| a6ba855b9aac1a2778f63b9f63a27611 |
allenai/tk-instruct-3b-def | allenai | t5 | 11 | 696 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['Super-NaturalInstructions'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | [] | false | true | true | 5,602 | false |
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, the task definition, demonstration examples, or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to shool.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting results, you are welcome to share them with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encoding, we found they can usually work with other types of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` | b7063896c1095a92090349b9378343db |
sd-concepts-library/4tnght | sd-concepts-library | null | 110 | 0 | null | 14 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 11,288 | false | ### 4tNGHT on Stable Diffusion
This is the `<4tNGHT>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
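Besides the notebooks, a minimal sketch of using the concept directly with 🧨 Diffusers (this assumes a recent diffusers release with `load_textual_inversion`, the runwayml/stable-diffusion-v1-5 base checkpoint, and an illustrative prompt):
```python
import torch
from diffusers import StableDiffusionPipeline
# Load a Stable Diffusion base checkpoint, then attach the learned <4tNGHT> embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/4tnght")
image = pipe("a seaside town in the style of <4tNGHT>").images[0]
image.save("4tnght_style.png")
```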
Here is the new concept you will be able to use as a `style`:
![<4tNGHT> 0](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/4.jpeg)
![<4tNGHT> 1](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/83.jpeg)
![<4tNGHT> 2](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/53.jpeg)
![<4tNGHT> 3](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/74.jpeg)
![<4tNGHT> 4](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/102.jpeg)
![<4tNGHT> 5](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/79.jpeg)
![<4tNGHT> 6](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/85.jpeg)
![<4tNGHT> 7](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/69.jpeg)
![<4tNGHT> 8](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/19.jpeg)
![<4tNGHT> 9](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/96.jpeg)
![<4tNGHT> 10](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/99.jpeg)
![<4tNGHT> 11](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/101.jpeg)
![<4tNGHT> 12](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/73.jpeg)
![<4tNGHT> 13](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/65.jpeg)
![<4tNGHT> 14](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/91.jpeg)
![<4tNGHT> 15](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/39.jpeg)
![<4tNGHT> 16](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/81.jpeg)
![<4tNGHT> 17](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/95.jpeg)
![<4tNGHT> 18](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/61.jpeg)
![<4tNGHT> 19](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/54.jpeg)
![<4tNGHT> 20](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/42.jpeg)
![<4tNGHT> 21](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/72.jpeg)
![<4tNGHT> 22](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/41.jpeg)
![<4tNGHT> 23](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/75.jpeg)
![<4tNGHT> 24](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/32.jpeg)
![<4tNGHT> 25](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/18.jpeg)
![<4tNGHT> 26](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/67.jpeg)
![<4tNGHT> 27](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/21.jpeg)
![<4tNGHT> 28](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/11.jpeg)
![<4tNGHT> 29](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/7.jpeg)
![<4tNGHT> 30](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/64.jpeg)
![<4tNGHT> 31](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/33.jpeg)
![<4tNGHT> 32](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/100.jpeg)
![<4tNGHT> 33](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/62.jpeg)
![<4tNGHT> 34](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/20.jpeg)
![<4tNGHT> 35](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/88.jpeg)
![<4tNGHT> 36](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/77.jpeg)
![<4tNGHT> 37](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/15.jpeg)
![<4tNGHT> 38](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/56.jpeg)
![<4tNGHT> 39](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/71.jpeg)
![<4tNGHT> 40](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/52.jpeg)
![<4tNGHT> 41](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/6.jpeg)
![<4tNGHT> 42](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/87.jpeg)
![<4tNGHT> 43](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/76.jpeg)
![<4tNGHT> 44](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/34.jpeg)
![<4tNGHT> 45](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/9.jpeg)
![<4tNGHT> 46](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/30.jpeg)
![<4tNGHT> 47](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/59.jpeg)
![<4tNGHT> 48](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/36.jpeg)
![<4tNGHT> 49](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/93.jpeg)
![<4tNGHT> 50](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/14.jpeg)
![<4tNGHT> 51](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/12.jpeg)
![<4tNGHT> 52](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/55.jpeg)
![<4tNGHT> 53](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/17.jpeg)
![<4tNGHT> 54](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/57.jpeg)
![<4tNGHT> 55](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/82.jpeg)
![<4tNGHT> 56](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/103.jpeg)
![<4tNGHT> 57](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/45.jpeg)
![<4tNGHT> 58](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/47.jpeg)
![<4tNGHT> 59](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/43.jpeg)
![<4tNGHT> 60](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/94.jpeg)
![<4tNGHT> 61](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/84.jpeg)
![<4tNGHT> 62](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/25.jpeg)
![<4tNGHT> 63](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/68.jpeg)
![<4tNGHT> 64](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/2.jpeg)
![<4tNGHT> 65](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/90.jpeg)
![<4tNGHT> 66](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/80.jpeg)
![<4tNGHT> 67](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/8.jpeg)
![<4tNGHT> 68](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/38.jpeg)
![<4tNGHT> 69](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/13.jpeg)
![<4tNGHT> 70](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/27.jpeg)
![<4tNGHT> 71](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/37.jpeg)
![<4tNGHT> 72](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/24.jpeg)
![<4tNGHT> 73](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/92.jpeg)
![<4tNGHT> 74](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/35.jpeg)
![<4tNGHT> 75](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/3.jpeg)
![<4tNGHT> 76](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/51.jpeg)
![<4tNGHT> 77](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/89.jpeg)
![<4tNGHT> 78](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/40.jpeg)
![<4tNGHT> 79](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/22.jpeg)
![<4tNGHT> 80](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/63.jpeg)
![<4tNGHT> 81](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/98.jpeg)
![<4tNGHT> 82](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/97.jpeg)
![<4tNGHT> 83](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/10.jpeg)
![<4tNGHT> 84](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/70.jpeg)
![<4tNGHT> 85](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/46.jpeg)
![<4tNGHT> 86](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/23.jpeg)
![<4tNGHT> 87](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/104.jpeg)
![<4tNGHT> 88](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/29.jpeg)
![<4tNGHT> 89](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/28.jpeg)
![<4tNGHT> 90](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/44.jpeg)
![<4tNGHT> 91](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/78.jpeg)
![<4tNGHT> 92](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/1.jpeg)
![<4tNGHT> 93](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/86.jpeg)
![<4tNGHT> 94](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/16.jpeg)
![<4tNGHT> 95](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/60.jpeg)
![<4tNGHT> 96](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/58.jpeg)
![<4tNGHT> 97](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/5.jpeg)
![<4tNGHT> 98](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/49.jpeg)
![<4tNGHT> 99](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/26.jpeg)
![<4tNGHT> 100](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/0.jpeg)
![<4tNGHT> 101](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/50.jpeg)
![<4tNGHT> 102](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/48.jpeg)
![<4tNGHT> 103](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/66.jpeg)
![<4tNGHT> 104](https://huggingface.co/sd-concepts-library/4tnght/resolve/main/concept_images/31.jpeg)
| 58a7e3d55f49d796ae8bde40a7caa5af |
jonatasgrosman/whisper-small-pt-cv11-v5 | jonatasgrosman | whisper | 39 | 14 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,787 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Portuguese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 pt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3056
- Wer: 14.6841
- Cer: 5.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
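In the meantime, a minimal inference sketch (assuming the standard 🤗 Transformers ASR pipeline applies; the audio path is illustrative):
```python
from transformers import pipeline
# Whisper Small fine-tuned on Portuguese Common Voice 11.0.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/whisper-small-pt-cv11-v5")
print(asr("amostra.wav")["text"])
```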
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.2817 | 0.92 | 500 | 0.3352 | 15.9476 | 6.3609 |
| 0.2245 | 1.84 | 1000 | 0.3047 | 15.0231 | 5.9326 |
| 0.1587 | 2.76 | 1500 | 0.2985 | 15.0847 | 5.9326 |
| 0.1181 | 3.68 | 2000 | 0.3056 | 14.6841 | 5.8856 |
| 0.0741 | 4.6 | 2500 | 0.3162 | 14.9923 | 5.9906 |
| 0.0438 | 5.52 | 3000 | 0.3466 | 15.4700 | 6.2255 |
| 0.0294 | 6.45 | 3500 | 0.3799 | 15.2234 | 6.1647 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| f399da3479a30a7fcea8851b244e5223 |
DunnBC22/vit-base-patch16-224-in21k_car_or_motorcycle | DunnBC22 | vit | 14 | 4 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,605 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_car_or_motorcycle
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0301
- Accuracy: 0.9938
- Weighted f1: 0.9939
- Weighted recall: 0.9927
- Weighted precision: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
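Until this section is filled in, a minimal inference sketch (assuming the standard image-classification pipeline applies; the image path is illustrative):
```python
from transformers import pipeline
# ViT fine-tuned to distinguish cars from motorcycles.
classifier = pipeline("image-classification", model="DunnBC22/vit-base-patch16-224-in21k_car_or_motorcycle")
print(classifier("vehicle.jpg"))
```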
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|
| 0.6908 | 1.0 | 200 | 0.0372 | 0.99 | 0.9902 | 0.9902 | 0.9902 |
| 0.6908 | 2.0 | 400 | 0.0301 | 0.9938 | 0.9939 | 0.9927 | 0.9951 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
| 0afdbc21bd0fdd767ba2080f441c94cc |
enoriega/rule_learning_margin_1mm | enoriega | bert | 18 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | null | ['enoriega/odinsynth_dataset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,524 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3806
- Margin Accuracy: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.6482 | 0.16 | 20 | 0.6494 | 0.7263 |
| 0.5151 | 0.32 | 40 | 0.5088 | 0.7792 |
| 0.4822 | 0.48 | 60 | 0.4429 | 0.8045 |
| 0.4472 | 0.64 | 80 | 0.4265 | 0.8107 |
| 0.4352 | 0.8 | 100 | 0.4155 | 0.8132 |
| 0.4335 | 0.96 | 120 | 0.4128 | 0.8116 |
| 0.4113 | 1.12 | 140 | 0.4119 | 0.8142 |
| 0.4186 | 1.28 | 160 | 0.4075 | 0.8120 |
| 0.42 | 1.44 | 180 | 0.4072 | 0.8123 |
| 0.4175 | 1.6 | 200 | 0.4080 | 0.8130 |
| 0.4097 | 1.76 | 220 | 0.4031 | 0.8128 |
| 0.397 | 1.92 | 240 | 0.4004 | 0.8130 |
| 0.4115 | 2.08 | 260 | 0.3979 | 0.8136 |
| 0.4108 | 2.24 | 280 | 0.3940 | 0.8167 |
| 0.4125 | 2.4 | 300 | 0.3879 | 0.8218 |
| 0.4117 | 2.56 | 320 | 0.3848 | 0.8217 |
| 0.3967 | 2.72 | 340 | 0.3818 | 0.8231 |
| 0.3947 | 2.88 | 360 | 0.3813 | 0.8240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| 6de3ebda22db86ba93f382779eea40d5 |
keogh/stable-diffusion-compvis-v1-4 | keogh | null | 23 | 10 | diffusers | 2 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 15,624 | false |
# Stable Diffusion v1-4 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
      @InProceedings{Rombach_2022_CVPR,
          author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month = {June},
          year = {2022},
          pages = {10684-10695}
      }
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
### PyTorch
```bash
pip install --upgrade diffusers transformers scipy
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to(device)
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-4"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### JAX/Flax
To use Stable Diffusion on TPUs and GPUs for faster inference, you can leverage JAX/Flax.
Running the pipeline with the default PNDMScheduler:
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, 8)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, 8)
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
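When the pipeline is loaded through Diffusers, the safety checker is attached by default and runs on every generation, and flagged outputs are replaced with black images. The short sketch below shows how to inspect its verdict; it assumes a recent diffusers release in which the pipeline output exposes an `nsfw_content_detected` field, and is an illustration rather than part of the original release notes.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, revision="fp16"
).to("cuda")

out = pipe("a photo of an astronaut riding a horse on mars")
print(out.nsfw_content_detected)  # one boolean per generated image
out.images[0].save("astronaut_rides_horse.png")
```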
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
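As a rough, runnable illustration of this objective (not the actual training code), the sketch below replaces the VAE encoder, text encoder, and UNet with tiny stand-in modules and uses a simplified noising rule; the real system uses a KL-regularized autoencoder, the CLIP ViT-L/14 text encoder, a cross-attention UNet, and a DDPM noise schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the real components (dimensions are illustrative)
encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)   # VAE encoder stand-in: f=8 downsampling, 4 latent channels
text_encoder = nn.Embedding(1000, 768)               # text-encoder stand-in
unet = nn.Conv2d(4, 4, kernel_size=3, padding=1)     # UNet stand-in (ignores conditioning for brevity)

images = torch.randn(2, 3, 512, 512)                 # a dummy training batch
token_ids = torch.randint(0, 1000, (2, 77))

latents = encoder(images)                             # 2 x 4 x 64 x 64 latents (H/f x W/f x 4)
text_emb = text_encoder(token_ids)                    # conditioning; fed via cross-attention in the real UNet

noise = torch.randn_like(latents)
t = torch.rand(latents.shape[0], 1, 1, 1)             # per-sample noise level (real model samples discrete timesteps)
noisy_latents = (1 - t) * latents + t * noise         # simplified noising; the real model follows a DDPM schedule

pred_noise = unet(noisy_latents)                      # the real UNet also receives t and text_emb
loss = F.mse_loss(pred_noise, noise)                  # reconstruction objective on the added noise
loss.backward()
```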
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the hardware, runtime, cloud provider, and compute region listed below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | 86486e95a6c696c91515a757bf33a80d |
laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K | laion | null | 10 | 1,594 | open_clip | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 12,524 | false | # Model Card for CLIP-convnext_base_w.laion_aesthetic-s13B-b82k
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increase of augmentation + regularization for image tower via adding (greater scale range of RRC, random erasing, stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples and have ImageNet zero-shot top-1 of >= 70.8%. Comparing to ViT-B/16 at 34B samples seen (SS) with a zero-shot accuracy of 70.2% (68.1% for 13B SS), this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
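For readers who want a concrete picture of the `augreg` recipe, the sketch below approximates the image-tower augmentations with torchvision, using the scale ranges and probabilities from the table above; stochastic depth is a model-side regularizer (exposed as `drop_path_rate` in timm) rather than a data transform. This is an approximation for illustration, not the exact OpenCLIP training pipeline.

```python
import timm
from torchvision import transforms

# Approximate "augreg" image augmentations: RRC (0.33, 1.0) and RE (0.35)
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(256, scale=(0.33, 1.0)),  # non-augreg runs use scale=(0.9, 1.0)
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.35),
])

# Stochastic depth (SD 0.1) lives inside the image tower, not in the data pipeline
image_tower = timm.create_model("convnext_base", drop_path_rate=0.1)
```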
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
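A minimal zero-shot classification sketch with OpenCLIP follows; the `hf-hub:` loading path, image file, and label set are illustrative assumptions rather than part of this card.

```python
import torch
import open_clip
from PIL import Image

model_id = "hf-hub:laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K"
model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog", "a photo of a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the three candidate captions
```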
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a custom-trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning applies there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For 256x256 models, the slurm script with srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for the time spent on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For 320x320 models, the same as above but with 32 8-GPU nodes and a local batch size of 320, or 64 4-GPU nodes on JUWELS.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.
![](convnext_base_w_zero_shot.png)
An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
``` | 1bf584517d50f18f08699d9f1196f8aa |
domenicrosati/SPECTER-finetuned-DAGPap22 | domenicrosati | bert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'generated_from_trainer'] | true | true | true | 2,676 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-finetuned-DAGPap22
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Accuracy: 0.9993
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.3422 | 1.0 | 669 | 0.4135 | 0.8914 | 0.9140 |
| 0.1074 | 2.0 | 1338 | 0.1216 | 0.9746 | 0.9811 |
| 0.0329 | 3.0 | 2007 | 0.0064 | 0.9989 | 0.9992 |
| 0.0097 | 4.0 | 2676 | 0.0132 | 0.9972 | 0.9980 |
| 0.0123 | 5.0 | 3345 | 0.0231 | 0.9961 | 0.9971 |
| 0.0114 | 6.0 | 4014 | 0.0080 | 0.9985 | 0.9989 |
| 0.0029 | 7.0 | 4683 | 0.2207 | 0.9727 | 0.9797 |
| 0.0075 | 8.0 | 5352 | 0.0145 | 0.9974 | 0.9981 |
| 0.0098 | 9.0 | 6021 | 0.0047 | 0.9994 | 0.9996 |
| 0.0025 | 10.0 | 6690 | 0.0000 | 1.0 | 1.0 |
| 0.0044 | 11.0 | 7359 | 0.0035 | 0.9993 | 0.9995 |
| 0.0 | 12.0 | 8028 | 0.0027 | 0.9996 | 0.9997 |
| 0.0027 | 13.0 | 8697 | 0.0036 | 0.9993 | 0.9995 |
| 0.0055 | 14.0 | 9366 | 0.0017 | 0.9998 | 0.9999 |
| 0.0 | 15.0 | 10035 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 16.0 | 10704 | 0.0000 | 1.0 | 1.0 |
| 0.0022 | 17.0 | 11373 | 0.0111 | 0.9981 | 0.9986 |
| 0.0004 | 18.0 | 12042 | 0.0011 | 0.9994 | 0.9996 |
| 0.0 | 19.0 | 12711 | 0.0020 | 0.9994 | 0.9996 |
| 0.0 | 20.0 | 13380 | 0.0023 | 0.9993 | 0.9995 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 307603d13b6aadb001958b97a22836c6 |
pig4431/Sentiment140_roBERTa_5E | pig4431 | roberta | 11 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | ['sentiment140'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,973 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment140_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sentiment140 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4796
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.699 | 0.08 | 50 | 0.6734 | 0.5467 |
| 0.6099 | 0.16 | 100 | 0.4322 | 0.8 |
| 0.4906 | 0.24 | 150 | 0.3861 | 0.84 |
| 0.4652 | 0.32 | 200 | 0.4288 | 0.7933 |
| 0.4874 | 0.4 | 250 | 0.3872 | 0.84 |
| 0.4735 | 0.48 | 300 | 0.3401 | 0.8667 |
| 0.3909 | 0.56 | 350 | 0.3484 | 0.84 |
| 0.4277 | 0.64 | 400 | 0.3207 | 0.88 |
| 0.3894 | 0.72 | 450 | 0.3310 | 0.8733 |
| 0.4523 | 0.8 | 500 | 0.3389 | 0.8667 |
| 0.4087 | 0.88 | 550 | 0.3515 | 0.8467 |
| 0.3973 | 0.96 | 600 | 0.3513 | 0.8467 |
| 0.4016 | 1.04 | 650 | 0.3501 | 0.8667 |
| 0.3613 | 1.12 | 700 | 0.3327 | 0.8667 |
| 0.343 | 1.2 | 750 | 0.3518 | 0.86 |
| 0.314 | 1.28 | 800 | 0.3555 | 0.88 |
| 0.3407 | 1.36 | 850 | 0.3849 | 0.86 |
| 0.2944 | 1.44 | 900 | 0.3576 | 0.8667 |
| 0.3267 | 1.52 | 950 | 0.3461 | 0.8733 |
| 0.3251 | 1.6 | 1000 | 0.3411 | 0.8667 |
| 0.321 | 1.68 | 1050 | 0.3371 | 0.88 |
| 0.3057 | 1.76 | 1100 | 0.3322 | 0.88 |
| 0.3335 | 1.84 | 1150 | 0.3106 | 0.8667 |
| 0.3363 | 1.92 | 1200 | 0.3158 | 0.8933 |
| 0.2972 | 2.0 | 1250 | 0.3122 | 0.88 |
| 0.2453 | 2.08 | 1300 | 0.3327 | 0.8867 |
| 0.2467 | 2.16 | 1350 | 0.3767 | 0.8667 |
| 0.273 | 2.24 | 1400 | 0.3549 | 0.8667 |
| 0.2672 | 2.32 | 1450 | 0.3470 | 0.88 |
| 0.2352 | 2.4 | 1500 | 0.4092 | 0.8667 |
| 0.2763 | 2.48 | 1550 | 0.3472 | 0.9 |
| 0.2858 | 2.56 | 1600 | 0.3440 | 0.9 |
| 0.2206 | 2.64 | 1650 | 0.3770 | 0.88 |
| 0.2928 | 2.72 | 1700 | 0.3280 | 0.8867 |
| 0.2478 | 2.8 | 1750 | 0.3426 | 0.8867 |
| 0.2362 | 2.88 | 1800 | 0.3578 | 0.8933 |
| 0.2107 | 2.96 | 1850 | 0.3986 | 0.8933 |
| 0.2191 | 3.04 | 1900 | 0.3819 | 0.8933 |
| 0.2267 | 3.12 | 1950 | 0.4047 | 0.8867 |
| 0.2076 | 3.2 | 2000 | 0.4303 | 0.8867 |
| 0.1868 | 3.28 | 2050 | 0.4385 | 0.8933 |
| 0.2239 | 3.36 | 2100 | 0.4175 | 0.8933 |
| 0.2082 | 3.44 | 2150 | 0.4142 | 0.8933 |
| 0.2423 | 3.52 | 2200 | 0.4002 | 0.8867 |
| 0.1878 | 3.6 | 2250 | 0.4662 | 0.88 |
| 0.1892 | 3.68 | 2300 | 0.4783 | 0.88 |
| 0.2259 | 3.76 | 2350 | 0.4487 | 0.88 |
| 0.1859 | 3.84 | 2400 | 0.4456 | 0.8933 |
| 0.2042 | 3.92 | 2450 | 0.4468 | 0.8933 |
| 0.2096 | 4.0 | 2500 | 0.4153 | 0.8867 |
| 0.178 | 4.08 | 2550 | 0.4100 | 0.8933 |
| 0.1621 | 4.16 | 2600 | 0.4292 | 0.8933 |
| 0.1682 | 4.24 | 2650 | 0.4602 | 0.8933 |
| 0.1813 | 4.32 | 2700 | 0.4680 | 0.8933 |
| 0.2033 | 4.4 | 2750 | 0.4735 | 0.8933 |
| 0.1662 | 4.48 | 2800 | 0.4750 | 0.88 |
| 0.1686 | 4.56 | 2850 | 0.4830 | 0.8933 |
| 0.1603 | 4.64 | 2900 | 0.4909 | 0.8933 |
| 0.148 | 4.72 | 2950 | 0.4784 | 0.8933 |
| 0.162 | 4.8 | 3000 | 0.4750 | 0.8867 |
| 0.153 | 4.88 | 3050 | 0.4759 | 0.8867 |
| 0.1657 | 4.96 | 3100 | 0.4796 | 0.8933 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
| 0141b2b3e70f618e437921d33e22f852 |
abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1 | abdoutony207 | m2m_100 | 12 | 2 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['opus100'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,952 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize16-20epoch-1
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3640
- Bleu: 13.1835
- Meteor: 0.1189
- Gen Len: 17.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 6.1776 | 1.0 | 100 | 3.8904 | 10.5866 | 0.0995 | 16.64 |
| 2.4531 | 2.0 | 200 | 1.0928 | 12.3452 | 0.1108 | 17.0575 |
| 0.512 | 3.0 | 300 | 0.3625 | 10.5224 | 0.0982 | 17.2575 |
| 0.1924 | 4.0 | 400 | 0.3342 | 12.4242 | 0.1098 | 16.6325 |
| 0.1227 | 5.0 | 500 | 0.3403 | 13.0526 | 0.1185 | 17.3475 |
| 0.0889 | 6.0 | 600 | 0.3481 | 13.1323 | 0.1133 | 17.815 |
| 0.0651 | 7.0 | 700 | 0.3601 | 12.6684 | 0.1133 | 17.3525 |
| 0.0533 | 8.0 | 800 | 0.3640 | 13.1835 | 0.1189 | 17.72 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| e8dce3d56b1022341c892a0a0c6c1928 |
anuragshas/whisper-large-v2-mt | anuragshas | whisper | 23 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['mt'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Maltese
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 mt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3616
- Wer: 18.4644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0023 | 9.0 | 1000 | 0.3616 | 18.4644 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 5948fe588b7c398de182fcfb4cf4c38a |
facebook/data2vec-audio-base-960h | facebook | data2vec-audio | 8 | 6,062 | transformers | 6 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['librispeech_asr'] | null | 1 | 1 | 0 | 0 | 1 | 1 | 0 | ['speech', 'hf-asr-leaderboard'] | true | true | true | 3,870 | false |
# Data2Vec-Audio-Base-960h
[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)
This is the base model, pretrained and fine-tuned on 960 hours of Librispeech (16kHz sampled speech audio). When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2202.03555)
Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli
**Abstract**
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
# Pre-Training method
![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png)
For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/data2vec-audio-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from transformers import Wav2Vec2Processor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
from jiwer import wer
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h").to("cuda")
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
def map_to_pred(batch):
    # batched map with batch_size=1: batch["audio"] is a list with a single decoded example
    input_values = processor(batch["audio"][0]["array"], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.77 | 7.08 | | 9f5d83487cc7c19da1ffff60dbad6890 |
Lilya/distilbert-base-uncased-ner-invoiceSenderName_all_inv_20_12 | Lilya | distilbert | 10 | 4 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,216 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner-invoiceSenderName_all_inv_20_12
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0145
- eval_precision: 0.0
- eval_recall: 0.0
- eval_f1: 0.0
- eval_accuracy: 0.9957
- eval_runtime: 511.2392
- eval_samples_per_second: 42.113
- eval_steps_per_second: 2.633
- epoch: 4.0
- step: 30500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.10.3
| 45e663225ca566c9125ca98f805a41bf |
yuzhi/distilbert-imdb | yuzhi | distilbert | 18 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,204 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1819
- Accuracy: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2723 | 1.0 | 782 | 0.1819 | 0.9289 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
| 6824ef845370a6804aab17560b2576cb |
model-attribution-challenge/bert-base-uncased | model-attribution-challenge | bert | 11 | 5 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 10,426 | false |
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Other 24 smaller models are released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
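A small, runnable sketch of this 80/10/10 masking rule is given below; the helper function and example sentence are illustrative and not part of the original BERT codebase.

```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def mask_tokens(token_ids, mask_prob=0.15):
    labels = [-100] * len(token_ids)               # -100 = position not used in the MLM loss
    for i, tok in enumerate(token_ids):
        if tok in tokenizer.all_special_ids or random.random() >= mask_prob:
            continue
        labels[i] = tok                            # predict the original token at this position
        r = random.random()
        if r < 0.8:                                # 80%: replace with [MASK]
            token_ids[i] = tokenizer.mask_token_id
        elif r < 0.9:                              # 10%: replace with a random token
            token_ids[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: leave the token unchanged
    return token_ids, labels

ids = tokenizer("the quick brown fox jumps over the lazy dog")["input_ids"]
print(mask_tokens(ids))
```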
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
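The optimizer and schedule described above can be expressed with standard PyTorch/Transformers utilities. The sketch below is an equivalent configuration for illustration only (AdamW stands in for Adam with decoupled weight decay, and the training loop itself is omitted); it is not the original TensorFlow pretraining code.

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)

# Each step: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```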
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 7bf215fb8c9eca61bf60cd2bfa8e42eb |
LYTinn/gpt2-finetuning-sentiment-model-3000-samples | LYTinn | gpt2 | 10 | 30 | transformers | 0 | text-classification | true | false | false | mit | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,020 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7442
- Accuracy: 0.6467
- F1: 0.6159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.0
- Pytorch 1.12.1+cu116
- Datasets 2.5.2
- Tokenizers 0.13.1
| c61612f05b6dda258e61ccad67cf6f7d |
KoichiYasuoka/roberta-small-japanese-aozora-char | KoichiYasuoka | roberta | 8 | 4 | transformers | 0 | fill-mask | true | false | false | cc-by-sa-4.0 | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['japanese', 'masked-lm'] | false | true | true | 617 | false |
# roberta-small-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on Aozora Bunko (青空文庫) texts with a character tokenizer. You can fine-tune `roberta-small-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-char-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
```
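For a quick sanity check, the model can also be called through the fill-mask pipeline; the example sentence below is illustrative.

```py
from transformers import pipeline
fmp = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-japanese-aozora-char")
mask = fmp.tokenizer.mask_token
print(fmp(f"日本の首都は東京{mask}ある。")[:3])  # top-3 predictions for the masked character
```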
| 580c9e09cc40e2bbe67d0033c804002e |
Helsinki-NLP/opus-mt-lua-fi | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-lua-fi
* source languages: lua
* target languages: fi
* OPUS readme: [lua-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.eval.txt)
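A minimal usage sketch with Transformers is shown below; the Marian classes are the standard way to load converted OPUS-MT checkpoints, and the input string is only a placeholder for Luba-Lulua text.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lua-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["..."]  # replace with Luba-Lulua input text
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))  # Finnish output
```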
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.fi | 23.5 | 0.450 |
| b2d6a1846487e07c14f0dede2e2df3a7 |
jamiehuang/t5-small-finetuned-xsum | jamiehuang | t5 | 14 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['eli5'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,417 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6746
- Rouge1: 13.2962
- Rouge2: 2.0081
- Rougel: 10.6529
- Rougelsum: 12.049
- Gen Len: 18.9985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.8901 | 1.0 | 17040 | 3.6746 | 13.2962 | 2.0081 | 10.6529 | 12.049 | 18.9985 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| aa3ce4b29c445f26277ccf744d34dae5 |
Okyx/fillmaskmodel | Okyx | xlm-roberta | 7 | 4 | transformers | 0 | fill-mask | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,311 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fillmaskmodel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Tokenizers 0.12.1
| 83b91ee454696c1ff16f355f69374a1e |
anas-awadalla/roberta-large-initialization-seed-4 | anas-awadalla | roberta | 11 | 3 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,032 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-initialization-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| ccdd24529ca177e2ba2fbfb2bed16d03 |
ckiplab/bert-base-han-chinese-pos-xiandai | ckiplab | bert | 8 | 7 | transformers | 0 | token-classification | true | false | false | gpl-3.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'token-classification', 'bert', 'zh'] | false | true | true | 746 | false |
# CKIP BERT Base Han Chinese POS
This model provides part-of-speech (POS) tagging for the ancient Chinese language. Our training dataset covers four eras of the Chinese language.
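A minimal usage sketch with the Transformers token-classification pipeline is shown below; the example sentence is illustrative.

```python
from transformers import pipeline

pos = pipeline("token-classification", model="ckiplab/bert-base-han-chinese-pos-xiandai")
print(pos("我喜歡讀書。"))  # one POS tag per token
```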
## Homepage
* [ckiplab/han-transformers](https://github.com/ckiplab/han-transformers)
## Training Datasets
The copyright of the datasets belongs to the Institute of Linguistics, Academia Sinica.
* [中央研究院上古漢語標記語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/akiwi/kiwi.sh)
* [中央研究院中古漢語語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/dkiwi/kiwi.sh)
* [中央研究院近代漢語語料庫](http://lingcorpus.iis.sinica.edu.tw/cgi-bin/kiwi/pkiwi/kiwi.sh)
* [中央研究院現代漢語語料庫](http://asbc.iis.sinica.edu.tw)
## Contributors
* Chin-Tung Lin at [CKIP](https://ckip.iis.sinica.edu.tw/) | 3056151e19ea1ed005ceb5891c68f116 |
JBNLRY/distilbert-base-uncased-finetuned-cola | JBNLRY | distilbert | 13 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5432 | 0.4243 |
| 0.3447 | 2.0 | 1070 | 0.4968 | 0.5187 |
| 0.2347 | 3.0 | 1605 | 0.6540 | 0.5280 |
| 0.1747 | 4.0 | 2140 | 0.7547 | 0.5367 |
| 0.1255 | 5.0 | 2675 | 0.8366 | 0.5472 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 9226513599c9a701754eb325406695b3 |
hisaoka/bart-large-cnn_dataset_radiology_summary20221129.tsv | hisaoka | bart | 11 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,060 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn_dataset_radiology_summary20221129.tsv
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 8d174b990800f5c5bc0f25b5dd1b17be |
raisvaza/distilbert-base-uncased-finetuned-ner | raisvaza | distilbert | 10 | 3 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,853 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# raisvaza/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0352
- Validation Loss: 0.0607
- Train Precision: 0.9246
- Train Recall: 0.9330
- Train F1: 0.9288
- Train Accuracy: 0.9832
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1955 | 0.0720 | 0.8998 | 0.9157 | 0.9077 | 0.9792 | 0 |
| 0.0557 | 0.0620 | 0.9200 | 0.9271 | 0.9235 | 0.9822 | 1 |
| 0.0352 | 0.0607 | 0.9246 | 0.9330 | 0.9288 | 0.9832 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
| bd015bc605adcffe449b3dedfabdfaaa |
kadirnar/yolov8l-v8.0 | kadirnar | null | 3 | 0 | null | 1 | object-detection | false | false | false | gpl-3.0 | null | ['detection-datasets/coco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'computer-vision', 'yolov8', 'yolov5'] | false | true | true | 457 | false |
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/): YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

# Example threshold and input values (adjust to your use case).
conf_threshold = 0.25   # minimum confidence kept for a detection
iou_threshold = 0.45    # IoU threshold used by NMS
image = 'test.jpg'      # path, URL, PIL image or numpy array
image_size = 640

model = YOLO('kadirnar/yolov8l-v8.0')  # point this at the downloaded .pt file if needed
prediction = model.predict(image, imgsz=image_size, conf=conf_threshold, iou=iou_threshold, show=False, save=False)
```
### BibTeX Entry and Citation Info
```
``` | cc19e299c1e13ebe5d0593dee075efaa |
CalvinHuang/mt5-small-finetuned-amazon-en-es | CalvinHuang | mt5 | 13 | 6 | transformers | 1 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,997 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.2936
- Rouge2: 8.0678
- Rougel: 16.8129
- Rougelsum: 16.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| 232d6463c1be530ef13f8eccfe8a9e9b |
microsoft/xclip-base-patch16-hmdb-8-shot | microsoft | xclip | 10 | 5 | transformers | 0 | feature-extraction | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'video-classification'] | true | true | true | 2,425 | false |
# X-CLIP (base-sized model)
X-CLIP model (base-sized, patch resolution of 16) trained in a few-shot fashion (K=8) on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP).
This model was trained using 32 frames per video, at a resolution of 224x224.
Disclaimer: The team releasing X-CLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
X-CLIP is a minimal extension of [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.
![X-CLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/xclip_architecture.png)
This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.
## Intended uses & limitations
You can use the raw model for determining how well text goes with a given video. See the [model hub](https://huggingface.co/models?search=microsoft/xclip) to look for
fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/xclip.html#).
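As a rough sketch (the class names and the dummy clip below are assumptions rather than content from this card), zero-shot classification of a 32-frame video could look like:

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

ckpt = "microsoft/xclip-base-patch16-hmdb-8-shot"
processor = XCLIPProcessor.from_pretrained(ckpt)
model = XCLIPModel.from_pretrained(ckpt)

# 32 dummy RGB frames at 224x224; replace with frames decoded from a real video.
video = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))
inputs = processor(text=["brushing hair", "riding a bike"], videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits_per_video.softmax(dim=1))  # probability per text label
```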
## Training data
This model was trained on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/).
### Preprocessing
The exact details of preprocessing during training can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L247).
The exact details of preprocessing during validation can be found [here](https://github.com/microsoft/VideoX/blob/40f6d177e0a057a50ac69ac1de6b5938fd268601/X-CLIP/datasets/build.py#L285).
During validation, one resizes the shorter edge of each frame, after which center cropping is performed to a fixed-size resolution (like 224x224). Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
## Evaluation results
This model achieves a top-1 accuracy of 62.8%.
| de025dbfb09c06b97561cb4f410f41cd |
sd-concepts-library/turtlepics | sd-concepts-library | null | 10 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,049 | false | ### turtlepics on Stable Diffusion
This is the `<henry-leonardi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<henry-leonardi> 0](https://huggingface.co/sd-concepts-library/turtlepics/resolve/main/concept_images/0.jpeg)
![<henry-leonardi> 1](https://huggingface.co/sd-concepts-library/turtlepics/resolve/main/concept_images/1.jpeg)
![<henry-leonardi> 2](https://huggingface.co/sd-concepts-library/turtlepics/resolve/main/concept_images/2.jpeg)
![<henry-leonardi> 3](https://huggingface.co/sd-concepts-library/turtlepics/resolve/main/concept_images/3.jpeg)
| d2921f06f6ac257e42521c548bb4d275 |
uzn/ddpm-trucks | uzn | null | 16 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['uzn/truck'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,179 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-trucks
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `uzn/truck` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sampling sketch (assumed usage; check the repo's model_index.json for the exact pipeline class)
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("uzn/ddpm-trucks")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the `uzn/truck` dataset (see above).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/uzn/ddpm-trucks/tensorboard?#scalars)
| 4ecdfe725df16923f67db627ebb8e56d |
pyordii/distilgpt2-finetuned-AT | pyordii | gpt2 | 17 | 5 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,593 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-AT
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 279 | 3.3451 |
| 3.4534 | 2.0 | 558 | 3.2941 |
| 3.4534 | 3.0 | 837 | 3.2740 |
| 3.2435 | 4.0 | 1116 | 3.2617 |
| 3.2435 | 5.0 | 1395 | 3.2556 |
| 3.1729 | 6.0 | 1674 | 3.2490 |
| 3.1729 | 7.0 | 1953 | 3.2475 |
| 3.1262 | 8.0 | 2232 | 3.2467 |
| 3.0972 | 9.0 | 2511 | 3.2448 |
| 3.0972 | 10.0 | 2790 | 3.2450 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 3110b49a3d1e68a416874dc1e7f96f2f |
IsaacZhy/bert-base-goemotions | IsaacZhy | bert | 14 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['go_emotions'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,034 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-goemotions
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1539
- F1: 0.5727
- Roc Auc: 0.7796
- Accuracy: 0.4375
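For a quick check, the checkpoint can be loaded with the `transformers` text-classification pipeline; the sketch below is an assumption (the input sentence is made up and the label names come from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="IsaacZhy/bert-base-goemotions", top_k=None)
# GoEmotions is multi-label, so inspect all returned label scores rather than only the top one.
print(classifier("I can't believe how well this turned out, thank you so much!"))
```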
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.0833 | 1.0 | 2714 | 0.0876 | 0.5453 | 0.7189 | 0.4243 |
| 0.0719 | 2.0 | 5428 | 0.0867 | 0.5586 | 0.7322 | 0.4399 |
| 0.0575 | 3.0 | 8142 | 0.0943 | 0.5736 | 0.7523 | 0.4665 |
| 0.0411 | 4.0 | 10856 | 0.1064 | 0.5655 | 0.7580 | 0.4574 |
| 0.0301 | 5.0 | 13570 | 0.1167 | 0.5622 | 0.7591 | 0.4517 |
| 0.0217 | 6.0 | 16284 | 0.1279 | 0.5579 | 0.7648 | 0.4375 |
| 0.015 | 7.0 | 18998 | 0.1367 | 0.5663 | 0.7759 | 0.4333 |
| 0.0102 | 8.0 | 21712 | 0.1445 | 0.5695 | 0.7793 | 0.4322 |
| 0.0077 | 9.0 | 24426 | 0.1491 | 0.5725 | 0.7795 | 0.4366 |
| 0.0057 | 10.0 | 27140 | 0.1539 | 0.5727 | 0.7796 | 0.4375 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 7b3f0cdd96b11d34a66e255859a5aab8 |
harithapliyal/distilbert-base-uncased-finetuned-cola | harithapliyal | distilbert | 16 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8366
- Matthews Correlation: 0.5602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5475 | 0.4039 |
| 0.3482 | 2.0 | 1070 | 0.5140 | 0.5004 |
| 0.2408 | 3.0 | 1605 | 0.6472 | 0.5264 |
| 0.1765 | 4.0 | 2140 | 0.7456 | 0.5403 |
| 0.1314 | 5.0 | 2675 | 0.8366 | 0.5602 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 3fefffb9b8a2c5c0e95f610b6ecdae85 |
Helsinki-NLP/opus-mt-de-fj | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-de-fj
* source languages: de
* target languages: fj
* OPUS readme: [de-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.fj | 24.6 | 0.470 |
| a1bc978538508ba68587a5e3b6a1de4a |
Aayesha/t5-end2end-questions-generation | Aayesha | t5 | 6 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad_modified_for_t5_qg'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,269 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.609 | 0.34 | 100 | 1.9542 |
| 2.0336 | 0.68 | 200 | 1.8015 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 7579ef23a31491d6b2e691ca41f25583 |
paola-md/distilr2-lr1e05-wd0.1-bs64 | paola-md | roberta | 6 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilr2-lr1e05-wd0.1-bs64
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2723
- Rmse: 0.5219
- Mse: 0.2723
- Mae: 0.4098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2779 | 1.0 | 312 | 0.2756 | 0.5250 | 0.2756 | 0.4246 |
| 0.2747 | 2.0 | 624 | 0.2734 | 0.5229 | 0.2734 | 0.4091 |
| 0.2732 | 3.0 | 936 | 0.2726 | 0.5221 | 0.2726 | 0.4155 |
| 0.2714 | 4.0 | 1248 | 0.2723 | 0.5219 | 0.2723 | 0.4098 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
| f27fa515a7a3ae1fef4d2e2edcc4e7fe |
Goutham-Vignesh/flan-t5-gov-report-sum | Goutham-Vignesh | t5 | 20 | 13 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['govreport-summarization'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,808 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-gov-report-sum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the govreport-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2385
- Rouge1: 5.8729
- Rouge2: 3.0763
- Rougel: 5.1016
- Rougelsum: 5.646
- Gen Len: 19.0
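A minimal inference sketch with the `transformers` summarization pipeline (the placeholder input and generation lengths are assumptions):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Goutham-Vignesh/flan-t5-gov-report-sum")
report_text = "Replace this with the government report text you want to summarize."
print(summarizer(report_text, max_length=64, min_length=16)[0]["summary_text"])
```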
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5801 | 1.0 | 2190 | 2.3211 | 5.6226 | 2.9142 | 4.9535 | 5.417 | 19.0 |
| 2.5125 | 2.0 | 4380 | 2.2748 | 5.7982 | 3.0365 | 5.0726 | 5.5837 | 19.0 |
| 2.453 | 3.0 | 6570 | 2.2545 | 5.8744 | 3.0997 | 5.1196 | 5.6524 | 19.0 |
| 2.436 | 4.0 | 8760 | 2.2430 | 5.8669 | 3.0525 | 5.0849 | 5.631 | 19.0 |
| 2.4144 | 5.0 | 10950 | 2.2385 | 5.8729 | 3.0763 | 5.1016 | 5.646 | 19.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
| 52b7b93add955442f66a8bbf31dcf4a7 |
mgoudarz/xlm-roberta-base-finetuned-panx-all | mgoudarz | xlm-roberta | 9 | 11 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1757
- F1: 0.8513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2986 | 1.0 | 835 | 0.1939 | 0.8077 |
| 0.1547 | 2.0 | 1670 | 0.1813 | 0.8351 |
| 0.1003 | 3.0 | 2505 | 0.1757 | 0.8513 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| c27182b10f557865a5909eacbabd9abe |
ConvLab/t5-small-nlg-multiwoz21 | ConvLab | t5 | 7 | 250 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['ConvLab/multiwoz21'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['t5-small', 'text2text-generation', 'natural language generation', 'conversational system', 'task-oriented dialog'] | true | true | true | 706 | false |
# t5-small-nlg-multiwoz21
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
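Outside of ConvLab-3 the checkpoint loads like any other T5 model, but the expected input is ConvLab-3's serialized dialogue-act format; the sketch below (an assumption) only shows loading:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ConvLab/t5-small-nlg-multiwoz21"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Inputs must follow ConvLab-3's dialogue-act serialization; see the repository linked above.
```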
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| 21cf9fdc2d3d7b2de6abd1e6594e8cec |
Helsinki-NLP/opus-mt-en-bcl | Helsinki-NLP | marian | 10 | 60 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 797 | false |
### opus-mt-en-bcl
* source languages: en
* target languages: bcl
* OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bcl | 54.3 | 0.722 |
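A standard MarianMT usage sketch (the example sentence is an assumption):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-bcl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["How are you today?"], return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```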
| 8441c7d833ee6a9647cab71909448724 |
gchhablani/fnet-large-finetuned-rte | gchhablani | fnet | 45 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,328 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-rte
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| c6923489cd7777744dd6a8927b659070 |
lora-library/a-photo-of-simbatheog | lora-library | null | 29 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 538 | false | # LoRA DreamBooth - a-photo-of-simbatheog
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: A photo of simbatheog in a bucket
![image_0](test_images/image_0.png)
![image_1](test_images/image_1.png)
![image_2](test_images/image_2.png)
![image_3](test_images/image_3.png)
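A rough inference sketch with 🤗 Diffusers (the calls below follow the usual LoRA DreamBooth workflow and are assumptions, not content from this card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
# Attach the LoRA attention weights from this repository.
pipe.unet.load_attn_procs("lora-library/a-photo-of-simbatheog")
image = pipe("A photo of simbatheog in a bucket", num_inference_steps=25).images[0]
image.save("simbatheog.png")
```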
| 82db4456e4c5616c2fd60bb65aa74a33 |
Hate-speech-CNERG/kannada-codemixed-abusive-MuRIL | Hate-speech-CNERG | bert | 7 | 3 | transformers | 0 | text-classification | true | false | false | afl-3.0 | ['ka-en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 945 | false |
This model is used to detect **abusive speech** in **Code-Mixed Kannada**. It is fine-tuned from the MuRIL model on a Code-Mixed Kannada abusive speech dataset.
The model is trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive)
LABEL_0 :-> Normal
LABEL_1 :-> Abusive
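A minimal usage sketch with the `transformers` text-classification pipeline (the example input is a placeholder):

```python
from transformers import pipeline

detector = pipeline("text-classification", model="Hate-speech-CNERG/kannada-codemixed-abusive-MuRIL")
# LABEL_0 -> Normal, LABEL_1 -> Abusive
print(detector("replace with a code-mixed Kannada sentence"))
```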
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~ | 0676fdea764d9dcb9279028612113e09 |
SebastianKamphYT/VintageCream | SebastianKamphYT | null | 5 | 0 | null | 13 | null | false | false | false | cc0-1.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,115 | false |
![VntgCrm_example_grid.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670833927638-6334a32686c3fdcdc7adf4c0.jpeg)
[![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://www.patreon.com/sebastiankamph)
# Vintage cream photo film
Based on SD 2.1
768x768
**Token word: vntgcrm style**
**Example prompt to start out with**
RAW candid cinema, woman portrait, vntgcrm style, 16mm, ((remarkable color)), (ultra realistic)
Negative: ugly, disfigured, deformed, too many hands, makeup, cartoon, render
**Support my work on Patreon for Early access model releases**
https://www.patreon.com/sebastiankamph
**AI Art, Stable diffusion guides and tutorials on Youtube**
https://www.youtube.com/@sebastiankamph
**Chat in our community discord**
https://discord.com/invite/dFB7zuXyFY
**Installation**
Download the .ckpt and the .yaml file.
Put them inside \stable-diffusion-webui\Models\Stable-diffusion\
https://huggingface.co/SebastianKamphYT/VintageCream/blob/main/VintageCream.ckpt
https://huggingface.co/SebastianKamphYT/VintageCream/blob/main/VintageCream.yaml
| 1a0fe5216deeb35546aa4e12220e1fbb |
NandaKr/disaster_classification_with_distilbert | NandaKr | distilbert | 6 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 946 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster_classification_with_distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
| de8eb5d961580878664929f51059e925 |
no3/kat-wd-1.4-beta1 | no3 | null | 24 | 10 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,055 | false | ### a Better model is out, Go to https://huggingface.co/no3/kat-at3-beta1
### kat from [Flipon](https://store.steampowered.com/app/1285020/Flipon/) on [WD](https://huggingface.co/hakurei/waifu-diffusion) via Dreambooth
#### model by no3
This is the kat concept from Flipon taught to waifu-diffusion v1.4 with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks_kaatt**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts).
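As a quick sketch (prompt and settings are assumptions), the concept can also be run directly with 🤗 Diffusers:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("no3/kat-wd-1.4-beta1")
pipe = pipe.to("cuda")  # optional, if a GPU is available
image = pipe("a portrait of sks_kaatt, highly detailed").images[0]
image.save("kat.png")
```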
### note
If you want to use it in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any other UI that uses .ckpt files, just download one or more of the files below for your convenience.
[katFl-wd-1.4-beta1.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/katFl-wd-1.4-beta1.ckpt) 5.16 GB
[katFl-wd-1.4-beta1-pruned.ckpt](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/katFl-wd-1.4-beta1-pruned.ckpt) 2.58 GB, uses less storage space, but is not yet tested
If you have issues or questions feel free to visit the Community Tab and start discussion about it.
Here are images used for training this concept:
![image 1](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/1.png)
![image 2](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/2.png)
![image 3](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/3.png)
![image 4](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/1%20c.png)
![image 5](https://huggingface.co/no3/kat-wd-1.4-beta1/resolve/main/concept_images/2%20c.png) | 73dd378eec169a31a9ca3064a177b1f4 |
ali2066/finetuned_token_2e-05_16_02_2022-14_37_42 | ali2066 | distilbert | 13 | 10 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,787 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_37_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| 34e6b6a5f043d4d2e0066e4d1d74ae9e |
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_rte_256 | gokuls | mobilebert | 17 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,652 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_rte_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0847
- Accuracy: 0.4874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2703 | 1.0 | 1136 | 3.2768 | 0.4657 |
| 0.0555 | 2.0 | 2272 | 3.0847 | 0.4874 |
| 0.0253 | 3.0 | 3408 | 5.4968 | 0.5018 |
| 0.0149 | 4.0 | 4544 | 5.6020 | 0.4982 |
| 0.0104 | 5.0 | 5680 | 6.6683 | 0.5090 |
| 0.0082 | 6.0 | 6816 | 8.2220 | 0.5090 |
| 0.0062 | 7.0 | 7952 | 8.2179 | 0.5054 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 0850b2caf26c4f6c89c277133bf08f60 |
research-backup/bart-large-squadshifts-vanilla-amazon-qg | research-backup | bart | 15 | 1 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qg_squadshifts'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation'] | true | true | true | 4,160 | false |
# Model Card of `research-backup/bart-large-squadshifts-vanilla-amazon-qg`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: amazon) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (amazon)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-large-squadshifts-vanilla-amazon-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-large-squadshifts-vanilla-amazon-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-amazon-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json)
| | Score | Type | Dataset |
|:-----------|--------:|:-------|:---------------------------------------------------------------------------|
| BERTScore | 92.3 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 28.19 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 18.89 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 12.92 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 9.1 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 23.04 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 62.81 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 27.85 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: amazon
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-amazon-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 39e0d5b9c616eda9a93b0bb0acb27cbf |
StonyBrookNLP/preasm-large-iirc-retrieved | StonyBrookNLP | t5 | 8 | 4 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering, multi-step-reasoning, multi-hop-reasoning'] | false | true | true | 2,613 | false |
# What's this?
This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/preasm-large-iirc-retrieved"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"Who scored the first touchdown of the game?\n" +
"... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
``` | 7e04ee41d0de9dc4f640e537e9f6a06f |
jimypbr/bart-large-test | jimypbr | bart | 15 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['cnn_dailymail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,118 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the cnn_dailymail 3.0.0 dataset.
## Model description
More information needed
## Intended uses & limitations
This is a work in progress. Please don't use these weights.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1fd7e07a988f7a97422decb8f6658506 |
hsohn3/mayo-bert-uncased-wordlevel-block512-ep10 | hsohn3 | bert | 8 | 6 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,446 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-bert-uncased-wordlevel-block512-ep10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3171
- Epoch: 9
## Model description
- base_model: bert-base-uncased
- block_size: 512
- tokenizer: ehr-bert-wordlevel-uncased
## Intended uses & limitations
More information needed
## Training and evaluation data
- MAYO visit-level texts
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
- mlm_probability: 0.15
- batch_size: 8
- epochs: 10
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.0885 | 0 |
| 2.8340 | 1 |
| 2.7975 | 2 |
| 2.6720 | 3 |
| 2.4868 | 4 |
| 2.1750 | 5 |
| 1.8143 | 6 |
| 1.0948 | 7 |
| 0.4915 | 8 |
| 0.3171 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2b4534d4c27acb738df1a91d81f2fb6e |
yohida/yoshida_gpt | yohida | gpt2 | 3 | 4 | transformers | 0 | text-generation | false | false | false | mit | ['ja'] | ['cc100', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['ja', 'japanese', 'gpt', 'text-generation', 'lm', 'nlp'] | false | true | true | 2,129 | false |
# japanese-gpt-1b
![rinna-icon](./rinna.png)
This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")
if torch.cuda.is_available():
model = model.to("cuda")
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_length=100,
min_length=100,
do_sample=True,
top_k=500,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
bad_words_ids=[[tokenizer.unk_token_id]]
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
~~~~
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
# License
[The MIT license](https://opensource.org/licenses/MIT) | aab8d03c8ab67a5353f9a4c9e8b53808 |
sumups-ai/results | sumups-ai | bert | 13 | 3 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,686 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4578
- Precision: 0.0060
- Recall: 0.0286
- F1: 0.0099
- Accuracy: 0.4288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 1.6449 | 0.0 | 0.0 | 0.0 | 0.3860 |
| No log | 2.0 | 16 | 1.5439 | 0.0014 | 0.0071 | 0.0023 | 0.4025 |
| No log | 3.0 | 24 | 1.4986 | 0.0068 | 0.0286 | 0.0110 | 0.4176 |
| No log | 4.0 | 32 | 1.4603 | 0.0033 | 0.0143 | 0.0054 | 0.4285 |
| No log | 5.0 | 40 | 1.4578 | 0.0060 | 0.0286 | 0.0099 | 0.4288 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| ccede256c955ba798b4e5f079ea7f3c6 |
google/t5-efficient-tiny-nl8 | google | t5 | 12 | 7 | transformers | 5 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,245 | false |
# T5-Efficient-TINY-NL8 (Deep-Narrow version)
T5-Efficient-TINY-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details of the model architecture
This model checkpoint - **t5-efficient-tiny-nl8** - is of model type **Tiny** with the following variations:
- **nl** is **8**
It has **22.93** million parameters and thus requires *ca.* **91.74 MB** of memory in full precision (*fp32*)
or **45.87 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
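As a quick sanity check before fine-tuning, the checkpoint can be loaded with the transformers library; a minimal sketch (the parameter count printed should roughly match the 22.93 M figure quoted above):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny-nl8")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-tiny-nl8")

# Pretrained-only checkpoint: generations are only meaningful after fine-tuning,
# but loading it verifies the architecture and the parameter count.
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.2f}M parameters")
```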
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | dd9bd5a4731bdf97d8da37985fa8a0d0 |
PaulChimzy/stt_rw_conformer_transducer_large | PaulChimzy | null | 4 | 1 | nemo | 0 | automatic-speech-recognition | true | false | false | cc-by-4.0 | ['rw'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'speech', 'Kinyarwanda', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch'] | true | true | true | 2,177 | false |
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("PaulChimzy/stt_rw_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="PaulChimzy/stt_rw_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 Hz (16 kHz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL -
OR
USE THE Hugging Face Evaluate LIBRARY TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
Eg:
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) | adcb9f8d33809df66a385e34e9f7b56b |
mrm8488/wav2vec2-large-xlsr-53-breton | mrm8488 | wav2vec2 | 11 | 12 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['br'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,321 | false |
# Wav2Vec2-Large-XLSR-53-breton
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-breton")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.49 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found ???
| 7f537a076979c79c709f805378fce318 |
ProGamerGov/Object-Taped-To-Wall-Diffusion-V1 | ProGamerGov | null | 4 | 0 | null | 10 | text-to-image | false | false | false | creativeml-openrail-m | null | ['ProGamerGov/StableDiffusion-v1-5-Regularization-Images'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 1,723 | false |
**Object-Taped-To-Wall-Diffusion**
This fine-tuned Stable Diffusion v1.5 model was trained for 2000 iterations with a batch size of 4, on a selection of photos of things taped to a wall. Training was performed using [ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers) with full precision, prior-preservation loss, the train-text-encoder feature, and the new [1.5 MSE VAE from Stability AI](https://huggingface.co/stabilityai/sd-vae-ft-mse). A total of 2100 regularization / class images were used from [here](https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images). Regularization images were generated using the prompt "artwork style" with 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset.
Use the tokens **ttw style** in your prompts for the effect. Note that the effect also appears to occur at a much weaker strength on prompts that steer the output towards specific artistic styles.
This model will likely not perform well on taping objects that are not traditionally able to be taped to walls.
<div align="center">
<img src="https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/Object-Taped-To-Wall-Diffusion-V1/resolve/main/v1_size_512x512_t4x8.png)
Example images were generated with the v1 2000 iteration model using DPM++ 2S a Karras:
```
ttw style, <object> taped to wall
```
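A minimal sketch of generating an image from this checkpoint with the diffusers library (the object in the prompt is illustrative, and a CUDA GPU is assumed):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ProGamerGov/Object-Taped-To-Wall-Diffusion-V1",
    torch_dtype=torch.float16,
).to("cuda")

# "ttw style" is the trigger token described above; the object is illustrative.
image = pipe("ttw style, a rubber duck taped to wall", guidance_scale=7).images[0]
image.save("ttw_example.png")
```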
This model was inspired by the 2019 art piece [*Comedian* by Italian artist Maurizio Cattelan](https://en.wikipedia.org/wiki/Comedian_(artwork\)), where a banana was duct taped to a wall.
| f047438277b0962b700fab5585302a05 |
AalLaa/whisper-small-ar | AalLaa | whisper | 15 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,513 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8342
- Wer: 82.3706
## Model description
More information needed
## Intended uses & limitations
More information needed
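Pending further documentation, the checkpoint can presumably be used like any fine-tuned Whisper model through the transformers ASR pipeline; a minimal, untested sketch (the audio file path is illustrative):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AalLaa/whisper-small-ar",
)

# Illustrative 16 kHz audio file; for long clips, chunking can be enabled on
# recent transformers versions via the chunk_length_s argument.
print(asr("sample_arabic.wav")["text"])
```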
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6454 | 5.0 | 1000 | 1.8790 | 86.8695 |
| 0.0408 | 10.0 | 2000 | 2.4389 | 80.5579 |
| 0.0043 | 15.0 | 3000 | 2.7456 | 82.2767 |
| 0.002 | 20.0 | 4000 | 2.8342 | 82.3706 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 5332f36530855159f8b09e8222b0c23d |
jj-co/gtr-t5-base | jj-co | t5 | 16 | 3 | sentence-transformers | 1 | feature-extraction | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 1,721 | false |
# sentence-transformers/gtr-t5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the Tensorflow model [gtr-base-1](https://tfhub.dev/google/gtr/gtr-base/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-base model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
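Since the model is aimed at semantic search, a typical pattern is to encode a query and candidate passages and rank the passages by cosine similarity; a minimal sketch (query and passages are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/gtr-t5-base")

query = "How do I bake bread at home?"
passages = [
    "A basic bread recipe needs flour, water, yeast and salt.",
    "The stock market closed higher on Friday.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity to the query.
scores = util.cos_sim(query_emb, passage_embs)[0]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```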
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-base)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
| 21917bbdb15a2a07885abf0d98d13dd8 |
muhtasham/bert-small-finer-longer | muhtasham | bert | 9 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,311 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finer-longer
This model is a fine-tuned version of [muhtasham/bert-small-finer](https://huggingface.co/muhtasham/bert-small-finer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4264
## Model description
More information needed
## Intended uses & limitations
More information needed
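In the absence of further documentation, the checkpoint can be exercised as a plain masked-language model via the transformers fill-mask pipeline; a minimal sketch (the masked sentence is illustrative, and the pre-training corpus is not documented in this card):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="muhtasham/bert-small-finer-longer")

# Illustrative financial-style sentence; [MASK] is the BERT mask token.
for pred in fill_mask("The company reported a quarterly [MASK] of $2 million."):
    print(pred["token_str"], round(pred["score"], 3))
```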
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 1.6683 |
| 1.5941 | 0.97 | 1000 | 1.6569 |
| 1.5941 | 1.46 | 1500 | 1.6436 |
| 1.5605 | 1.94 | 2000 | 1.6173 |
| 1.5605 | 2.43 | 2500 | 1.6073 |
| 1.5297 | 2.91 | 3000 | 1.6001 |
| 1.5297 | 3.4 | 3500 | 1.5815 |
| 1.5022 | 3.89 | 4000 | 1.5756 |
| 1.5022 | 4.37 | 4500 | 1.5568 |
| 1.4753 | 4.86 | 5000 | 1.5458 |
| 1.4753 | 5.34 | 5500 | 1.5399 |
| 1.4537 | 5.83 | 6000 | 1.5273 |
| 1.4537 | 6.32 | 6500 | 1.5192 |
| 1.433 | 6.8 | 7000 | 1.5099 |
| 1.433 | 7.29 | 7500 | 1.5083 |
| 1.4169 | 7.77 | 8000 | 1.4957 |
| 1.4169 | 8.26 | 8500 | 1.4914 |
| 1.3982 | 8.75 | 9000 | 1.4859 |
| 1.3982 | 9.23 | 9500 | 1.4697 |
| 1.3877 | 9.72 | 10000 | 1.4711 |
| 1.3877 | 10.2 | 10500 | 1.4608 |
| 1.3729 | 10.69 | 11000 | 1.4583 |
| 1.3729 | 11.18 | 11500 | 1.4513 |
| 1.3627 | 11.66 | 12000 | 1.4498 |
| 1.3627 | 12.15 | 12500 | 1.4396 |
| 1.357 | 12.63 | 13000 | 1.4415 |
| 1.357 | 13.12 | 13500 | 1.4347 |
| 1.3484 | 13.61 | 14000 | 1.4316 |
| 1.3484 | 14.09 | 14500 | 1.4319 |
| 1.3442 | 14.58 | 15000 | 1.4268 |
| 1.3442 | 15.06 | 15500 | 1.4293 |
| 1.3387 | 15.55 | 16000 | 1.4217 |
| 1.3387 | 16.03 | 16500 | 1.4241 |
| 1.3358 | 16.52 | 17000 | 1.4250 |
| 1.3358 | 17.01 | 17500 | 1.4196 |
| 1.3344 | 17.49 | 18000 | 1.4193 |
| 1.3344 | 17.98 | 18500 | 1.4200 |
| 1.3274 | 18.46 | 19000 | 1.4250 |
| 1.3274 | 18.95 | 19500 | 1.4168 |
| 1.3348 | 19.44 | 20000 | 1.4164 |
| 1.3348 | 19.92 | 20500 | 1.4264 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 56741ddcbdc9c0b0333f477801775efb |
timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k | timm | null | 4 | 158 | timm | 1 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k', 'imagenet-12k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 22,468 | false |
# Model card for maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k
A timm specific MaxxViT-V2 image classification model (w/ MLP Log-CPB, a continuous log-coordinate relative position bias motivated by Swin-V2). Pretrained in `timm` on ImageNet-12k (a 11821 class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.
ImageNet-12k pretraining and ImageNet-1k fine-tuning were performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 116.1
- GMACs: 24.2
- Activations (M): 62.8
- Image size: 224 x 224
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
| dc4034e4cc46f6c6cb733a1c295c4747 |
clementchadebec/reproduced_wrapped_poincare_vae | clementchadebec | null | 7 | 0 | pythae | 0 | null | false | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pythae', 'reproducibility'] | false | true | true | 747 | false |
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_wrapped_poincare_vae")
```
## Reproducibility
This trained model reproduces the results of the official implementation of [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| PoincareVAE | MNIST | NLL (500 IS) | 101.66 (0.00) | 101.47 (0.01) |
[1] Mathieu, E., Le Lan, C., Maddison, C. J., Tomioka, R., & Teh, Y. W. (2019). Continuous hierarchical representations with poincaré variational auto-encoders. Advances in neural information processing systems, 32. | 6cc36e1514dfcf3ffa7b87720eb2635b |
StatsGary/norweigen-fjords-fjords | StatsGary | null | 17 | 105 | diffusers | 48 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] | false | true | true | 4,203 | false |
# DreamBooth model for the norweigen-fjords concept trained by StatsGary on the StatsGary/dreambooth-hackathon-images dataset.
This is a Stable Diffusion model fine-tuned on the norweigen-fjords concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a viking on the fjords**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `fjords` images for the landscape theme. Below are some examples of images generated with the model:
### Lobster swimming in a Fjord
The below example uses a prompt similar to *lobster swimming in a fjord* to generate the output:
![lobster.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673018851087-63b83d10e60862785afef49f.jpeg)
### Viking warrior in a Fjord
This represents a generated Viking warrior on or near a Fjord. The prompt used to generate is **prompt**=*a viking warrior on a fjord*:
![viking_on_fjord.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673019199954-627cebc6cecd686d4cd7411c.jpeg)
### A yellow submarine (inspired by The Beatles)
Here, we see a yellow submarine inspired by the popular Beatles album. The prompt used to generate is **prompt**=*a beetles like yellow submarines on a fjord*:
![Beetles_submarine.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673019576047-627cebc6cecd686d4cd7411c.jpeg)
### A cruise ship on a fjord
This is based on the **prompt**=*a cruise ship on a fjord*:
![6bd7a6b7-9716-478e-81ea-7f58b59707e8.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1673271806453-627cebc6cecd686d4cd7411c.jpeg)
### Taj Mahal on a Fjord
This generates landmarks near or on the fjord:
![68dd6b17-bb8c-45e7-bfe6-79442f633121.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674057958178-627cebc6cecd686d4cd7411c.jpeg)
### Watersports on a Fjord
This is an example of a kayaker on a fjord - generated using *prompt*="a kayaker on a fjord":
![1e730131-63c4-4095-9f36-61e8659c946a.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058117373-627cebc6cecd686d4cd7411c.jpeg)
What about a surfer on a fjord:
![surfer.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058620579-627cebc6cecd686d4cd7411c.jpeg)
### Godzilla wading through a Fjord
This one is a generated image of Godzilla wading through a Fjord:
![45618490-f4d3-44e4-ac8b-a0375b983576.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058731220-627cebc6cecd686d4cd7411c.jpeg)
### How about T-Rex
On the theme of Godzilla, what about T-Rex:
![eef051e5-267b-426e-97a1-fbd947185dba.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674058942184-627cebc6cecd686d4cd7411c.jpeg)
### Paintings on a Fjord
We could explore what a **Da Vinci** type painting would look like on a Fjord:
![davinci.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674664480840-627cebc6cecd686d4cd7411c.jpeg)
### A pet rabbit on a Fjord
What about your pet rabbit:
![rabbit.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674664175251-627cebc6cecd686d4cd7411c.jpeg)
### Pop Art of a Fjord
This is a pop art of a Fjord:
![pop_art.jpeg](https://s3.amazonaws.com/moonup/production/uploads/1674664922159-627cebc6cecd686d4cd7411c.jpeg).
## Generating your own predictions
The following Python code will get you up and running quickly: just replace the *prompt* field with your own, wait for Hugging Face to compute, and you should have your own Stable Diffusion object generated against a backdrop of the fjords. Idyllic!
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('StatsGary/norweigen-fjords-fjords')
image = pipeline(prompt='a viking on a fjord').images[0]
image
```
## Supporting article(s)
I have written blog posts explaining this:
- Fjord stable diffusion model: https://hutsons-hacks.info/stable-diffusion-model-for-generating-images-of-fjords
- Stable diffusion application with Streamlit: https://hutsons-hacks.info/stable-diffusion-application-with-streamlit
| a5ec6b545fddd10cfb6de185d1e721ca |
richardcai/distilbert-base-uncased-finetuned-emotion | richardcai | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
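For reference, an emotion classifier fine-tuned this way is normally used through the transformers text-classification pipeline; a minimal sketch (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="richardcai/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset has six labels: sadness, joy, love, anger, fear, surprise.
print(classifier("I can't believe I finally got the job, this is amazing!"))
```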
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8401 | 1.0 | 250 | 0.3144 | 0.9085 | 0.9058 |
| 0.2524 | 2.0 | 500 | 0.2133 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| d7910235313942a395f38529b8ea020d |
caiochacon/t5-small-finetuned-xsum | caiochacon | t5 | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 866 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
| 3f35c81a561841e6a8e2b7b3a6fd5b4b |
GioReg/bertdbmdzIhate | GioReg | bert | 13 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,049 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertdbmdzIhate
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6880
- Accuracy: 0.726
- F1: 0.4170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 70ad3d12752140dba28f85217b7436c8 |
fahadtouseef/wav2vec2-base-timit-demo-colab_2 | fahadtouseef | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,703 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3801
- Wer: 0.3035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7227 | 3.52 | 500 | 2.6961 | 1.0 |
| 1.1237 | 7.04 | 1000 | 0.6088 | 0.5315 |
| 0.4886 | 10.56 | 1500 | 0.4709 | 0.4353 |
| 0.3148 | 14.08 | 2000 | 0.4341 | 0.3942 |
| 0.2229 | 17.61 | 2500 | 0.4035 | 0.3616 |
| 0.1693 | 21.13 | 3000 | 0.3868 | 0.3289 |
| 0.1393 | 24.65 | 3500 | 0.3993 | 0.3135 |
| 0.118 | 28.17 | 4000 | 0.3801 | 0.3035 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 03dd21f62567a483d1a80352617d97e8 |
hieule/distilbert-base-uncased-scratch | hieule | distilbert | 13 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,311 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-scratch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.388 | 1.0 | 157 | 7.3651 |
| 6.9902 | 2.0 | 314 | 6.7300 |
| 6.659 | 3.0 | 471 | 6.6304 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 9530d31aa15dc986587850516019bb68 |
lvargas/distilbert-base-uncased-finetuned-emotion2 | lvargas | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,347 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Accuracy: 0.903
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 |
| 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
| 9008e41f8320c8d9d2ef65031d318df4 |
ImageIN/convnext-base-224_finetuned_on_ImageIn_annotations | ImageIN | convnext | 7 | 8 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,933 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-base-224_finetuned_on_ImageIn_annotations
This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0749
- Precision: 0.9722
- Recall: 0.9811
- F1: 0.9765
- Accuracy: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
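Absent further documentation, the checkpoint can be used like any transformers ConvNeXt image classifier; a minimal, untested sketch (the image path is illustrative and the label meanings are not documented in this card):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "ImageIN/convnext-base-224_finetuned_on_ImageIn_annotations"
# On transformers versions older than ~4.26, use AutoFeatureExtractor instead.
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("page_scan.jpg")  # illustrative input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```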
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 83 | 0.1368 | 0.9748 | 0.9632 | 0.9688 | 0.9772 |
| No log | 2.0 | 166 | 0.0734 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 3.0 | 249 | 0.0693 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 4.0 | 332 | 0.0698 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 5.0 | 415 | 0.0688 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| No log | 6.0 | 498 | 0.0690 | 0.9729 | 0.9751 | 0.9740 | 0.9807 |
| 0.0947 | 7.0 | 581 | 0.0666 | 0.9689 | 0.9800 | 0.9743 | 0.9807 |
| 0.0947 | 8.0 | 664 | 0.0642 | 0.9689 | 0.9800 | 0.9743 | 0.9807 |
| 0.0947 | 9.0 | 747 | 0.0790 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0947 | 10.0 | 830 | 0.0813 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0947 | 11.0 | 913 | 0.0797 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0947 | 12.0 | 996 | 0.0791 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0205 | 13.0 | 1079 | 0.0871 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0205 | 14.0 | 1162 | 0.0716 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
| 0.0205 | 15.0 | 1245 | 0.0746 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0205 | 16.0 | 1328 | 0.0917 | 0.9738 | 0.9692 | 0.9714 | 0.9789 |
| 0.0205 | 17.0 | 1411 | 0.0694 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0205 | 18.0 | 1494 | 0.0697 | 0.9768 | 0.9859 | 0.9812 | 0.9859 |
| 0.0166 | 19.0 | 1577 | 0.0689 | 0.9702 | 0.9835 | 0.9766 | 0.9824 |
| 0.0166 | 20.0 | 1660 | 0.0995 | 0.9738 | 0.9692 | 0.9714 | 0.9789 |
| 0.0166 | 21.0 | 1743 | 0.0847 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0166 | 22.0 | 1826 | 0.0843 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0166 | 23.0 | 1909 | 0.0869 | 0.9750 | 0.9727 | 0.9739 | 0.9807 |
| 0.0166 | 24.0 | 1992 | 0.0762 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0125 | 25.0 | 2075 | 0.0778 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0125 | 26.0 | 2158 | 0.0834 | 0.9763 | 0.9763 | 0.9763 | 0.9824 |
| 0.0125 | 27.0 | 2241 | 0.0818 | 0.9776 | 0.9799 | 0.9787 | 0.9842 |
| 0.0125 | 28.0 | 2324 | 0.0756 | 0.9684 | 0.9859 | 0.9768 | 0.9824 |
| 0.0125 | 29.0 | 2407 | 0.1150 | 0.9591 | 0.9824 | 0.9700 | 0.9772 |
| 0.0125 | 30.0 | 2490 | 0.0781 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 31.0 | 2573 | 0.0793 | 0.9716 | 0.9871 | 0.9790 | 0.9842 |
| 0.0111 | 32.0 | 2656 | 0.0713 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 33.0 | 2739 | 0.0802 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0111 | 34.0 | 2822 | 0.0636 | 0.9802 | 0.9870 | 0.9835 | 0.9877 |
| 0.0111 | 35.0 | 2905 | 0.0702 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0111 | 36.0 | 2988 | 0.0773 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 37.0 | 3071 | 0.0663 | 0.9781 | 0.9894 | 0.9836 | 0.9877 |
| 0.0145 | 38.0 | 3154 | 0.0721 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0145 | 39.0 | 3237 | 0.0708 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0145 | 40.0 | 3320 | 0.0729 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 41.0 | 3403 | 0.0760 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0145 | 42.0 | 3486 | 0.0771 | 0.9716 | 0.9871 | 0.9790 | 0.9842 |
| 0.0106 | 43.0 | 3569 | 0.0713 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0106 | 44.0 | 3652 | 0.0721 | 0.9748 | 0.9883 | 0.9813 | 0.9859 |
| 0.0106 | 45.0 | 3735 | 0.0732 | 0.9768 | 0.9859 | 0.9812 | 0.9859 |
| 0.0106 | 46.0 | 3818 | 0.0783 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0106 | 47.0 | 3901 | 0.0770 | 0.9789 | 0.9835 | 0.9811 | 0.9859 |
| 0.0106 | 48.0 | 3984 | 0.0744 | 0.9735 | 0.9847 | 0.9789 | 0.9842 |
| 0.0082 | 49.0 | 4067 | 0.0752 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
| 0.0082 | 50.0 | 4150 | 0.0749 | 0.9722 | 0.9811 | 0.9765 | 0.9824 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| d10f53d1d396eef2a5fa871bd0ad3c7a |
jamiehuang12/t5-small-finetuned-xsum | jamiehuang12 | t5 | 12 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['eli5'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,417 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6744
- Rouge1: 13.2843
- Rouge2: 2.006
- Rougel: 10.6541
- Rougelsum: 12.0343
- Gen Len: 18.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.8822 | 1.0 | 17040 | 3.6744 | 13.2843 | 2.006 | 10.6541 | 12.0343 | 18.9984 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 256ba259668b55812d4b51a31753754a |
debbiesoon/summarise_v10 | debbiesoon | led | 15 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,267 | false |
![SGH logo.png](https://s3.amazonaws.com/moonup/production/uploads/1667143139655-631feef1124782a19eff4243.png)
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the SGH news articles and summaries dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9680
- Rouge1 Precision: 0.4404
- Rouge1 Recall: 0.5874
- Rouge1 Fmeasure: 0.4653
- Rouge2 Precision: 0.2673
- Rouge2 Recall: 0.3871
- Rouge2 Fmeasure: 0.2897
- Rougel Precision: 0.3059
- Rougel Recall: 0.4418
- Rougel Fmeasure: 0.3308
- Rougelsum Precision: 0.3059
- Rougelsum Recall: 0.4418
- Rougelsum Fmeasure: 0.3308
## Model description
This model was created to generate summaries of news articles.
## Intended uses & limitations
The model accepts articles of up to 3072 tokens and generates summaries with a maximum length of 512 tokens and a minimum length of 100 tokens.
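A minimal generation sketch that follows these limits is shown below; the beam setting and the global-attention choice are assumptions, not the card's documented recipe.

```python
# Illustrative summarization sketch for this LED checkpoint (settings beyond the
# length limits are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("debbiesoon/summarise_v10")
model = AutoModelForSeq2SeqLM.from_pretrained("debbiesoon/summarise_v10")

article = "SGH announced ..."  # replace with the full news article text
inputs = tokenizer(article, max_length=3072, truncation=True, return_tensors="pt")

# LED models typically use global attention on at least the first token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=512,
    min_length=100,
    num_beams=2,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```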
## Training and evaluation data
This model was trained on 100+ articles and summaries from SGH.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 1.4834 | 0.43 | 10 | 1.7001 | 0.2304 | 0.6761 | 0.3152 | 0.1326 | 0.4034 | 0.1797 | 0.1495 | 0.4624 | 0.2069 | 0.1495 | 0.4624 | 0.2069 |
| 1.5011 | 0.87 | 20 | 1.6051 | 0.4301 | 0.5372 | 0.4087 | 0.2481 | 0.3439 | 0.245 | 0.2878 | 0.3928 | 0.2834 | 0.2878 | 0.3928 | 0.2834 |
| 0.9289 | 1.3 | 30 | 1.5501 | 0.431 | 0.597 | 0.4364 | 0.2653 | 0.393 | 0.2736 | 0.3007 | 0.4233 | 0.3037 | 0.3007 | 0.4233 | 0.3037 |
| 1.0895 | 1.74 | 40 | 1.5969 | 0.4661 | 0.5481 | 0.4486 | 0.2736 | 0.3439 | 0.2689 | 0.3318 | 0.4045 | 0.3221 | 0.3318 | 0.4045 | 0.3221 |
| 0.7785 | 2.17 | 50 | 1.5875 | 0.4527 | 0.5405 | 0.4209 | 0.2942 | 0.3634 | 0.272 | 0.3268 | 0.4047 | 0.3042 | 0.3268 | 0.4047 | 0.3042 |
| 0.635 | 2.61 | 60 | 1.6081 | 0.4142 | 0.5649 | 0.4172 | 0.242 | 0.3659 | 0.2549 | 0.2787 | 0.4156 | 0.2909 | 0.2787 | 0.4156 | 0.2909 |
| 0.514 | 3.04 | 70 | 1.6150 | 0.4431 | 0.5665 | 0.4569 | 0.2656 | 0.3754 | 0.2853 | 0.3252 | 0.441 | 0.3434 | 0.3252 | 0.441 | 0.3434 |
| 0.5617 | 3.48 | 80 | 1.6447 | 0.3956 | 0.6304 | 0.451 | 0.2353 | 0.425 | 0.2776 | 0.2883 | 0.4904 | 0.3332 | 0.2883 | 0.4904 | 0.3332 |
| 0.396 | 3.91 | 90 | 1.7423 | 0.4276 | 0.609 | 0.4506 | 0.2657 | 0.4142 | 0.2858 | 0.3091 | 0.4677 | 0.3316 | 0.3091 | 0.4677 | 0.3316 |
| 0.3427 | 4.35 | 100 | 1.7572 | 0.3877 | 0.5633 | 0.4169 | 0.216 | 0.3635 | 0.2468 | 0.2706 | 0.4314 | 0.3018 | 0.2706 | 0.4314 | 0.3018 |
| 0.3059 | 4.78 | 110 | 1.7705 | 0.4255 | 0.5524 | 0.4429 | 0.2495 | 0.3488 | 0.2671 | 0.3184 | 0.4275 | 0.3358 | 0.3184 | 0.4275 | 0.3358 |
| 0.2083 | 5.22 | 120 | 1.7840 | 0.4533 | 0.5896 | 0.4655 | 0.284 | 0.4142 | 0.308 | 0.3164 | 0.4442 | 0.3376 | 0.3164 | 0.4442 | 0.3376 |
| 0.2591 | 5.65 | 130 | 1.8396 | 0.4391 | 0.5315 | 0.4209 | 0.2768 | 0.3661 | 0.2707 | 0.3194 | 0.4124 | 0.3111 | 0.3194 | 0.4124 | 0.3111 |
| 0.2609 | 6.09 | 140 | 1.8220 | 0.4425 | 0.5712 | 0.4465 | 0.2642 | 0.3738 | 0.2727 | 0.3093 | 0.4349 | 0.3208 | 0.3093 | 0.4349 | 0.3208 |
| 0.1696 | 6.52 | 150 | 1.8916 | 0.475 | 0.5557 | 0.4686 | 0.2959 | 0.3783 | 0.3019 | 0.3409 | 0.4268 | 0.3442 | 0.3409 | 0.4268 | 0.3442 |
| 0.2683 | 6.96 | 160 | 1.8957 | 0.445 | 0.5918 | 0.4748 | 0.285 | 0.4021 | 0.3075 | 0.3249 | 0.4551 | 0.3522 | 0.3249 | 0.4551 | 0.3522 |
| 0.1259 | 7.39 | 170 | 1.9371 | 0.4473 | 0.5368 | 0.4664 | 0.2608 | 0.3355 | 0.282 | 0.3276 | 0.4071 | 0.3492 | 0.3276 | 0.4071 | 0.3492 |
| 0.1919 | 7.83 | 180 | 1.9521 | 0.4026 | 0.5528 | 0.438 | 0.2362 | 0.3427 | 0.2604 | 0.2751 | 0.3957 | 0.3042 | 0.2751 | 0.3957 | 0.3042 |
| 0.1279 | 8.26 | 190 | 1.9398 | 0.413 | 0.6053 | 0.4575 | 0.2511 | 0.403 | 0.2881 | 0.2662 | 0.4195 | 0.3027 | 0.2662 | 0.4195 | 0.3027 |
| 0.1176 | 8.7 | 200 | 1.9556 | 0.4363 | 0.565 | 0.4492 | 0.2591 | 0.3727 | 0.2806 | 0.3107 | 0.428 | 0.3289 | 0.3107 | 0.428 | 0.3289 |
| 0.1299 | 9.13 | 210 | 1.9642 | 0.4385 | 0.5728 | 0.4587 | 0.2687 | 0.3744 | 0.2888 | 0.3212 | 0.436 | 0.3404 | 0.3212 | 0.436 | 0.3404 |
| 0.1303 | 9.57 | 220 | 1.9649 | 0.43 | 0.5648 | 0.439 | 0.2605 | 0.3624 | 0.2691 | 0.2958 | 0.4135 | 0.3067 | 0.2958 | 0.4135 | 0.3067 |
| 0.1129 | 10.0 | 230 | 1.9680 | 0.4404 | 0.5874 | 0.4653 | 0.2673 | 0.3871 | 0.2897 | 0.3059 | 0.4418 | 0.3308 | 0.3059 | 0.4418 | 0.3308 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
| 2379cea9f5d9e4290a8aa224dd65e977 |
stevemobs/deberta-base-finetuned-squad1 | stevemobs | deberta | 17 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,220 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad1
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7928 | 1.0 | 7380 | 0.7810 |
| 0.5795 | 2.0 | 14760 | 0.8037 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| d6a4c596bf75da5d2150155d7a2a3574 |
andresca94/t5-small-finetuned-en-es | andresca94 | t5 | 12 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,418 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-es
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8937
- Rouge1: 32.6939
- Rouge2: 11.794
- Rougel: 31.9982
- Rougelsum: 31.9902
- Gen Len: 15.7947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.251 | 1.0 | 7061 | 1.8937 | 32.6939 | 11.794 | 31.9982 | 31.9902 | 15.7947 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| c7d5e450ef6eadde4d7d07f2f3a2bf0a |
microsoft/xdoc-base-funsd | microsoft | roberta | 8 | 22 | transformers | 1 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 756 | false |
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% of the parameters, XDoc achieves comparable or better performance on downstream tasks, which makes it cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
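For the FUNSD checkpoint in this repository, a minimal loading sketch is given below; it assumes the checkpoint exposes the standard RoBERTa tokenizer and token-classification interface and uses plain text as input, with label names taken from the checkpoint config.

```python
# Illustrative loading sketch for the FUNSD token-classification checkpoint
# (standard RoBERTa interface assumed).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/xdoc-base-funsd")
model = AutoModelForTokenClassification.from_pretrained("microsoft/xdoc-base-funsd")

text = "Date: March 3, 1998"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(-1)[0].tolist()
print([model.config.id2label[i] for i in predicted_ids])
```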
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
| 2da875ff9db69f2ee71df4e071ff5071 |
jonatasgrosman/exp_w2v2t_fa_vp-sv_s689 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fa'] | false | true | true | 469 | false | # exp_w2v2t_fa_vp-sv_s689
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
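A minimal transcription sketch with HuggingSound (the API shape follows the library's README; the audio paths are placeholders):

```python
# Transcribe local audio files with the fine-tuned checkpoint via HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_vp-sv_s689")
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```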
| f7eb6e7f97ae2fef8cba587747daebfe |
sd-concepts-library/collage3 | sd-concepts-library | null | 29 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,116 | false | ### Collage3 on Stable Diffusion
This is the `<Collage3>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<Collage3> 0](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/19.jpeg)
![<Collage3> 1](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/5.jpeg)
![<Collage3> 2](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/6.jpeg)
![<Collage3> 3](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/15.jpeg)
![<Collage3> 4](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/20.jpeg)
![<Collage3> 5](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/14.jpeg)
![<Collage3> 6](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/9.jpeg)
![<Collage3> 7](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/3.jpeg)
![<Collage3> 8](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/0.jpeg)
![<Collage3> 9](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/17.jpeg)
![<Collage3> 10](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/12.jpeg)
![<Collage3> 11](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/13.jpeg)
![<Collage3> 12](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/2.jpeg)
![<Collage3> 13](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/16.jpeg)
![<Collage3> 14](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/18.jpeg)
![<Collage3> 15](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/22.jpeg)
![<Collage3> 16](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/10.jpeg)
![<Collage3> 17](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/7.jpeg)
![<Collage3> 18](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/1.jpeg)
![<Collage3> 19](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/21.jpeg)
![<Collage3> 20](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/23.jpeg)
![<Collage3> 21](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/11.jpeg)
![<Collage3> 22](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/4.jpeg)
![<Collage3> 23](https://huggingface.co/sd-concepts-library/collage3/resolve/main/concept_images/8.jpeg)
| e2a01c7583e39eeaabcfba1d168c2e5e |
thegoodfellas/tgf-xlm-roberta-base-pt-br | thegoodfellas | xlm-roberta | 9 | 4 | transformers | 1 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | true | true | true | 1,821 | false |
# tgf-xlm-roberta-base-pt-br
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [BrWac](https://huggingface.co/datasets/thegoodfellas/brwac_tiny) dataset.
## Model description
This model is fine-tuned for Brazilian Portuguese. It was trained using the [BrWac](https://huggingface.co/datasets/thegoodfellas/brwac_tiny) dataset and follows the principles of the [RoBERTa paper](https://arxiv.org/abs/1907.11692). The key strategies are:
1. *Full-Sentences*: Quoted from the paper: "Each input is packed with full sentences sampled contiguously from one or more documents, such that the total length is at most 512 tokens. Inputs may cross document boundaries. When we reach the end of one document, we begin sampling sentences from the next document and add an extra separator token between documents".
2. Tuned hyperparameters: adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6 (as the paper suggests)
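Since the checkpoint keeps the standard XLM-RoBERTa architecture, it can be probed with the fill-mask pipeline — a minimal sketch (the example sentence is arbitrary):

```python
# Quick fill-mask check against the fine-tuned Brazilian Portuguese checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="thegoodfellas/tgf-xlm-roberta-base-pt-br")
print(fill_mask("A capital do Brasil é <mask>."))
```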
## Availability
The source code is available [here](https://github.com/the-good-fellas/xlm-roberta-pt-br)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.6.1
- Tokenizers 0.13.1
### Environment
4xA100.88V NVIDIA
Special thanks to [DataCrunch.io](https://datacrunch.io) for their amazing and affordable GPUs.
<img src="https://datacrunch.io/_next/static/media/Logo.6b773500.svg" width="20%"/>
| 6a71ffa428854819a5b165855d25d8be |
theojolliffe/bart-paraphrase-v4-e1-feedback-e4 | theojolliffe | bart | 12 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,758 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1-feedback-e4
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9640
- Rouge1: 61.6305
- Rouge2: 41.9892
- Rougel: 57.0694
- Rougelsum: 58.3816
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 34 | 2.8512 | 67.5001 | 46.2823 | 62.2247 | 63.3811 | 18.875 |
| No log | 2.0 | 68 | 2.3116 | 62.1089 | 43.432 | 57.564 | 58.8003 | 19.0 |
| No log | 3.0 | 102 | 2.0519 | 61.2025 | 40.9901 | 56.3369 | 57.5829 | 19.0 |
| No log | 4.0 | 136 | 1.9640 | 61.6305 | 41.9892 | 57.0694 | 58.3816 | 19.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
| c8e0a45b679d0eec4326430271ddbf96 |
sd-concepts-library/valorantstyle | sd-concepts-library | null | 10 | 0 | null | 17 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,142 | false | ### valorantstyle on Stable Diffusion
This is the `<valorant>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<valorant> 0](https://huggingface.co/sd-concepts-library/valorantstyle/resolve/main/concept_images/3.jpeg)
![<valorant> 1](https://huggingface.co/sd-concepts-library/valorantstyle/resolve/main/concept_images/0.jpeg)
![<valorant> 2](https://huggingface.co/sd-concepts-library/valorantstyle/resolve/main/concept_images/1.jpeg)
![<valorant> 3](https://huggingface.co/sd-concepts-library/valorantstyle/resolve/main/concept_images/2.jpeg)
![<valorant> 4](https://huggingface.co/sd-concepts-library/valorantstyle/resolve/main/concept_images/4.jpeg)
| 43cca3824ab5fe1652244a1fb3d5371b |
AndrewMcDowell/wav2vec2-xls-r-300m-japanese | AndrewMcDowell | wav2vec2 | 36 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ja'] | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'ja', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | true | true | true | 2,877 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-japanese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
Kanji are converted into Hiragana using the [pykakasi](https://pykakasi.readthedocs.io/en/latest/index.html) library during training and evaluation. The model can output both Hiragana and Katakana characters. Since there is no spacing, WER is not a suitable metric for evaluating performance and CER is more suitable.
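A minimal sketch of that normalization step, using pykakasi's `convert()` API (the exact options used during training are not documented here, so treat this as illustrative):

```python
# Illustrative Kanji -> Hiragana normalization with pykakasi (options assumed).
import pykakasi

kks = pykakasi.kakasi()
text = "日本語の音声認識"
hiragana = "".join(item["hira"] for item in kks.convert(text))
print(hiragana)  # e.g. にほんごのおんせいにんしき
```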
On mozilla-foundation/common_voice_8_0 it achieved:
- cer: 23.64%
On speech-recognition-community-v2/dev_data it achieved:
- cer: 30.99%
It achieves the following results on the evaluation set:
- Loss: 0.5212
- Wer: 1.3068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.0974 | 4.72 | 1000 | 4.0178 | 1.9535 |
| 2.1276 | 9.43 | 2000 | 0.9301 | 1.2128 |
| 1.7622 | 14.15 | 3000 | 0.7103 | 1.5527 |
| 1.6397 | 18.87 | 4000 | 0.6729 | 1.4269 |
| 1.5468 | 23.58 | 5000 | 0.6087 | 1.2497 |
| 1.4885 | 28.3 | 6000 | 0.5786 | 1.3222 |
| 1.451 | 33.02 | 7000 | 0.5726 | 1.3768 |
| 1.3912 | 37.74 | 8000 | 0.5518 | 1.2497 |
| 1.3617 | 42.45 | 9000 | 0.5352 | 1.2694 |
| 1.3113 | 47.17 | 10000 | 0.5228 | 1.2781 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset speech-recognition-community-v2/dev_data --config ja --split validation --chunk_length_s 5.0 --stride_length_s 1.0
``` | 745c5a1e5cd5e05c7c1d644052b28c61 |
cahya/wav2vec2-large-xlsr-turkish | cahya | wav2vec2 | 9 | 8 | transformers | 1 | automatic-speech-recognition | true | false | true | apache-2.0 | ['tr'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 3,465 | false |
# Wav2Vec2-Large-XLSR-Turkish
This is the model for Wav2Vec2-Large-XLSR-Turkish, a fine-tuned
[facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model = Wav2Vec2ForCTC.from_pretrained("cahya-wirawan/wav2vec2-large-xlsr-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.13 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
| dbac93eef026a35955d84360fd4ea9d6 |
Rajesh222/distilbert-base-uncased-finetuned-emotion | Rajesh222 | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8085 | 1.0 | 250 | 0.3033 | 0.9065 | 0.9037 |
| 0.2458 | 2.0 | 500 | 0.2133 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.0
- Tokenizers 0.11.6
| 3b5d7445c24dbfc23f3be7051ed670ca |
tyoc213/wav2vec2-large-xlsr-nahuatl | tyoc213 | wav2vec2 | 9 | 3,979 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['nah specifically ncj'] | ['created a new dataset based on https://www.openslr.org/92/'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | true | true | 3,633 | false |
# Wav2Vec2-Large-XLSR-53-ncj/nah
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nahuatl, specifically the variety from the North of Puebla (ncj), using a derivative of [SLR92](https://www.openslr.org/92/) and some samples of the `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice).
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: publish nahuatl_slr92_by_sentence
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Nahuatl of the North of Puebla (ncj) test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: publish nahuatl_slr92_by_sentence
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\"\“\%\‘\”\�\(\)\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 50.95 %
## Training
A derivative of [SLR92](https://www.openslr.org/92/), to be published soon, plus some samples of the `es` and `de` datasets from [Common Voice](https://huggingface.co/datasets/common_voice).
The script used for training can be found in [less60wer.ipynb](./less60wer.ipynb).
| 443b55a89b3fbc63507871d946206ef6 |
amartyobanerjee/xlm-roberta-base-finetuned-panx-en | amartyobanerjee | xlm-roberta | 10 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3927
- F1: 0.6863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.505 | 2.0 | 100 | 0.4627 | 0.6393 |
| 0.3783 | 3.0 | 150 | 0.3927 | 0.6863 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| d89bc4104cb3dc0fe8d65e6582a6ccc0 |
nasuka/deberta-v2-base-japanese-finetuned-emotion | nasuka | deberta-v2 | 10 | 4 | transformers | 0 | text-classification | true | false | false | cc-by-sa-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,356 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v2-base-japanese-finetuned-emotion
This model is a fine-tuned version of [ku-nlp/deberta-v2-base-japanese](https://huggingface.co/ku-nlp/deberta-v2-base-japanese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0465
- Accuracy: 0.9921
- F1: 0.9921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0493 | 1.0 | 806 | 0.0273 | 0.9940 | 0.9940 |
| 0.0106 | 2.0 | 1612 | 0.0465 | 0.9921 | 0.9921 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0+cu113
- Datasets 2.8.0
- Tokenizers 0.13.2
| 6e1040e401248611d93faa7271dcff0b |
IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese | IDEA-CCNL | null | 6 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['ZEN', 'chinese'] | false | true | true | 5,077 | false | # Erlangshen-ZEN2-345M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLU任务,使用了N-gram编码增强文本语义,3.45亿参数量的ZEN2
ZEN2 model, which uses N-gram to enhance text semantic and has 345M parameters, is adept at NLU tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN2 | 345M | 中文-Chinese |
## 模型信息 Model Information
我们与[ZEN团队](https://github.com/sinovation/ZEN)合作,使用我们的封神框架,开源发布了ZEN2模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN2使用大规模数据集和特殊的预训练策略对N-gram增强编码器进行预训练。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。
We open source and publicly release ZEN2 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN). More precisely, by bringing together knowledge extracted by unsupervised learning, ZEN learns different textual granularity information through N-gram methods. ZEN2 pre-trains the N-gram-enhanced encoders with large-scale datasets and special pre-training strategies. In the next step, we continue with the ZEN team to explore the optimization of PLM and improve the performance on downstream tasks.
### 下游效果 Performance
**分类任务 Classification**
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
**抽取任务 Extraction**
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN2相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure of ZEN2 in [transformers library](https://github.com/huggingface/transformers), you can find the structure of ZEN2 and run the codes in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
你可以从下方的链接获得我们做分类和抽取的详细示例。
You can get classification and extraction examples below.
[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_base_tnews.sh)
[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite our paper for this model:
```text
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | b9ba5af0c12a85cc67460bd85c4f153d |
abigailp/vacc | abigailp | bert | 77 | 22 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,045 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vacc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8424
- Accuracy: 0.8793
- F1: 0.9176
- Recall: 0.975
- Precision: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 0169ec11a166ad2c27fa8db4886fcf21 |
Conflictx/CandyPunk | Conflictx | null | 4 | 0 | null | 30 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'v2.0', 'Embedding'] | false | true | true | 1,360 | false |
Textual Inversion embedding by ConflictX for SD 2.0, trained on 768x768 images from Midjourney and other sources.
Install it by downloading the embedding file and putting it in the \embeddings folder.
Another themed embedding, this one more focused on vibrant and sweet environments.
Use keyword: CandyPunk
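For those using the diffusers library instead of the webui, a hypothetical sketch is shown below; the weight file name, token, and base checkpoint are assumptions.

```python
# Hypothetical diffusers usage; file name and base model are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion(
    "Conflictx/CandyPunk", weight_name="CandyPunk.pt", token="CandyPunk"  # assumed file name
)
image = pipe("cute room of ocean bottom, CandyPunk style").images[0]
image.save("candypunk.png")
```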
Images:
![00002-149071020-cute room of ocean bottom ,candypunk style.png](https://s3.amazonaws.com/moonup/production/uploads/1670100139191-6303c53d7373aacccd859bbd.png)
![00003-1792127834-cute room of refinery ,candypunk style.png](https://s3.amazonaws.com/moonup/production/uploads/1670100152329-6303c53d7373aacccd859bbd.png)
![00000-3163316236-furious adult woman in a cute room,candypunk style.png](https://s3.amazonaws.com/moonup/production/uploads/1670100158070-6303c53d7373aacccd859bbd.png)
![00001-4197392007-attracted 20 year old man in a cute room,candypunk style.png](https://s3.amazonaws.com/moonup/production/uploads/1670100163583-6303c53d7373aacccd859bbd.png)
![00007-3708365902-cute fluffy dragon on a table ,candypunk style, lovely serene lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1670100309746-6303c53d7373aacccd859bbd.png)
![00006-3014347479-cute fluffy parrot on a table ,candypunk style, lovely serene lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1670100316313-6303c53d7373aacccd859bbd.png)
| 8b77d6df3b34030c0e75bc80b5f67111 |
khasrul-alam/banglabert-finetuned-squad | khasrul-alam | distilbert | 15 | 4 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,016 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# khasrul-alam/banglabert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.8513
- Train End Logits Accuracy: 0.0
- Train Start Logits Accuracy: 0.0
- Validation Loss: 5.8678
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 5.9297 | 0.0 | 0.0208 | 5.9075 | 0.0 | 0.0 | 0 |
| 5.8513 | 0.0 | 0.0 | 5.8678 | 0.0 | 0.0 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
| 8bb2d704239f0cb2c5664a9769f30264 |