repo_id (string, 4-110 chars) | author (string, 2-27 chars, nullable) | model_type (string, 2-29 chars, nullable) | files_per_repo (int64, 2-15.4k) | downloads_30d (int64, 0-19.9M) | library (string, 2-37 chars, nullable) | likes (int64, 0-4.34k) | pipeline (string, 5-30 chars, nullable) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, 2-30 chars) | languages (string, 4-1.63k chars, nullable) | datasets (string, 2-2.58k chars, nullable) | co2 (string, 29 classes) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-15) | prs_closed (int64, 0-28) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, 2-513 chars) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64, 401-598k) | is_nc (bool, 1 class) | readme (string, 0-598k chars) | hash (string, 32 chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
espnet/kan-bayashi_vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_-truncated-69081b | espnet | null | 19 | 2 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['en'] | ['vctk'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,868 | false | ## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036264/
This model was trained by kan-bayashi using the vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
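Until the official demo snippet is added, here is a minimal inference sketch. It assumes `espnet2` and `espnet_model_zoo` are installed and that `Text2Speech.from_pretrained` can resolve the model name from the heading above; the exact API and return type vary between ESPnet releases, and GST-based multi-speaker models may additionally require a reference waveform or speaker embedding.
```python
# Sketch only; API details vary across ESPnet versions.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave"
)

out = tts("Hello, this is a test sentence.")
# Recent releases return a dict containing a "wav" tensor; tts.fs is the model sampling rate.
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```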
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 4899cfcf1302b7df734cd4d23c5582b5 |
jonatasgrosman/exp_w2v2t_et_unispeech_s605 | jonatasgrosman | unispeech | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['et'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'et'] | false | true | true | 469 | false | # exp_w2v2t_et_unispeech_s605
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
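A minimal transcription sketch with HuggingSound (assuming `pip install huggingsound`; the audio paths are illustrative and should point to 16 kHz recordings as noted above):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_unispeech_s605")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# Each result contains the transcription plus character-level timestamps and probabilities.
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```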
| 2c04c4b3fc72a1f7e1280d8b1fbf70b8 |
id2223lab1/whisper-small | id2223lab1 | whisper | 15 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sv'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sv - Swedish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
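For inference, the checkpoint can be loaded with the standard `transformers` automatic-speech-recognition pipeline. A minimal sketch (the file name is illustrative; long recordings are handled by chunking):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="id2223lab1/whisper-small",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)

print(asr("swedish_sample.wav")["text"])
```
Depending on the transformers version, you may need to set the forced decoder ids (language `sv`, task `transcribe`) explicitly for best results.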
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 5ea259c3678ebc0a97c56a339651fbff |
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | CAMeL-Lab | bert | 10 | 790 | transformers | 2 | text-classification | true | true | false | apache-2.0 | ['ar'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,276 | false | # CAMeLBERT Mix SA Model
## Model description
**CAMeLBERT Mix SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT Mix SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('sentiment-analysis', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | 517a929a3bb5c286d3aa0321a8c9dade |
TransQuest/monotransquest-da-any_en | TransQuest | xlm-roberta | 8 | 30 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['multilingual-en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Quality Estimation', 'monotransquest', 'DA'] | false | true | true | 5,304 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE has numerous potential uses. QE systems can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. They can also be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi on all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-any_en", num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| 48c214e0d78fa9789b79a67aba1ca896 |
tftransformers/gpt2-medium | tftransformers | null | 6 | 2 | null | 0 | null | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 5,380 | false |
# GPT-2
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with `tf-transformers`. The example below loads the pretrained weights, tokenizes an input with the Hugging Face tokenizer, and runs a forward pass:
```python
from tf_transformers.models import GPT2Model
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained("gpt2-medium")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
outputs_tf = model(inputs_tf)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> | b22c73c4d7cae1cc8323f465b146fed8 |
Bhumika-kumaran/t5-small-finetuned-xsum | Bhumika-kumaran | t5 | 13 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['xsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,418 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.2786
- Rouge2: 7.6957
- Rougel: 22.1976
- Rougelsum: 22.2034
- Gen Len: 18.8238
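A minimal usage sketch for generating a summary with this checkpoint; the input text and generation parameters are illustrative, and T5 summarization models conventionally expect the `summarize:` task prefix:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Bhumika-kumaran/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prepend the T5 task prefix and truncate long articles to the model's context size.
text = "summarize: " + "The full text of a news article goes here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```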
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7189 | 1.0 | 12753 | 2.4789 | 28.2786 | 7.6957 | 22.1976 | 22.2034 | 18.8238 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 6529807d24f27e1f36f201e38771df35 |
qBob/t5-small_corrector_15 | qBob | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,827 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_corrector_15
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3416
- Rouge1: 34.7998
- Rouge2: 9.0842
- Rougel: 27.8188
- Rougelsum: 27.839
- Gen Len: 18.5561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.2274 | 1.0 | 2365 | 2.9386 | 10.1244 | 1.0024 | 9.1029 | 9.1104 | 18.5377 |
| 2.7936 | 2.0 | 4730 | 2.0196 | 17.7168 | 3.0899 | 15.1305 | 15.1353 | 18.8883 |
| 2.2678 | 3.0 | 7095 | 1.7072 | 26.8501 | 5.7804 | 22.0034 | 22.0213 | 18.839 |
| 1.9029 | 4.0 | 9460 | 1.5254 | 32.9484 | 7.8531 | 26.4538 | 26.4749 | 18.502 |
| 1.5936 | 5.0 | 11825 | 1.3416 | 34.7998 | 9.0842 | 27.8188 | 27.839 | 18.5561 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 537dc6c065f1123323969c6c4e0d9957 |
saltacc/anime-ai-detect | saltacc | beit | 5 | 1,332 | transformers | 7 | image-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,266 | false |
# Anime AI Art Detect
A BEiT classifier to see if anime art was made by an AI or a human.
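A minimal inference sketch with the `transformers` image-classification pipeline; the image path is illustrative and the label names are assumed to come from the checkpoint's config:
```python
from transformers import pipeline

detector = pipeline("image-classification", model="saltacc/anime-ai-detect")

# Accepts a local path, URL, or PIL image and returns label/score pairs.
print(detector("some_anime_image.png"))
```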
### Disclaimer
Like most AI models, this classifier is not 100% accurate. Please do not take the results of this model as fact.
The best version reached 96% accuracy at distinguishing images from aibooru from images from the imageboard sites. However, the success you have with this model will vary based on the images you are trying to classify.
Here are some biases I have noticed from my testing:
- Images on aibooru, the site where the AI images were taken from, were high quality AI generations. Low quality AI generations have a higher chance of being misclassified
- Textual inversions and hypernetworks increase the chance of misclassification
### Training
This model was trained from microsoft/beit-base-patch16-224 for one epoch on 11 thousand images from imageboard sites, and 11 thousand images from aibooru.
You can view the wandb run [here](https://wandb.ai/saltacc/huggingface/runs/2mp30x7j?workspace=user-saltacc).
### Use Case
I don't intend for this model to be more accurate than humans for detecting AI art.
I think the best use cases for this model would be for cases where misclassification isn't a big deal, such as
removing AI art from a training dataset. | 08e253f20fca6182189adcd9bc267c6b |
Helsinki-NLP/opus-mt-vi-ru | Helsinki-NLP | marian | 11 | 65 | transformers | 0 | translation | true | true | false | apache-2.0 | ['vi', 'ru'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,003 | false |
### vie-rus
* source group: Vietnamese
* target group: Russian
* OPUS readme: [vie-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.rus | 16.9 | 0.331 |
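A minimal usage sketch with the `transformers` translation pipeline (Marian tokenizers require `sentencepiece`; the input sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-vi-ru")

# Vietnamese -> Russian
print(translator("Xin chào, bạn khỏe không?")[0]["translation_text"])
```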
### System Info:
- hf_name: vie-rus
- source_languages: vie
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'ru']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: rus
- short_pair: vi-ru
- chrF2_score: 0.331
- bleu: 16.9
- brevity_penalty: 0.878
- ref_len: 2207.0
- src_name: Vietnamese
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: ru
- prefer_old: False
- long_pair: vie-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | d78fae59a6a43229848e3db4e553d9e3 |
ghatgetanuj/distilbert-base-uncased_cls_bbc-news | ghatgetanuj | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,540 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_cls_bbc-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Accuracy: 0.976
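A minimal inference sketch with the `transformers` text-classification pipeline; the example headline is illustrative and the BBC News topic labels are assumed to be stored in the checkpoint's config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ghatgetanuj/distilbert-base-uncased_cls_bbc-news",
)

print(classifier("The central bank raised interest rates again this quarter."))
```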
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 77 | 0.2531 | 0.944 |
| No log | 2.0 | 154 | 0.0971 | 0.973 |
| No log | 3.0 | 231 | 0.0951 | 0.977 |
| No log | 4.0 | 308 | 0.1166 | 0.975 |
| No log | 5.0 | 385 | 0.1140 | 0.976 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 3bfdd074783eaee8c994557bb6485bdc |
sd-concepts-library/paolo-bonolis | sd-concepts-library | null | 9 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,060 | false | ### paolo bonolis on Stable Diffusion
This is the `<paolo-bonolis>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
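A minimal `diffusers` sketch for using the concept outside the notebooks (assuming a diffusers version that provides `load_textual_inversion` and a v1-style Stable Diffusion base such as `runwayml/stable-diffusion-v1-5`; float16 requires a GPU):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned <paolo-bonolis> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/paolo-bonolis")

image = pipe("a photo of <paolo-bonolis> on a wooden desk").images[0]
image.save("paolo-bonolis.png")
```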
Here is the new concept you will be able to use as an `object`:
![<paolo-bonolis> 0](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/3.jpeg)
![<paolo-bonolis> 1](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/1.jpeg)
![<paolo-bonolis> 2](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/0.jpeg)
![<paolo-bonolis> 3](https://huggingface.co/sd-concepts-library/paolo-bonolis/resolve/main/concept_images/2.jpeg)
| d451c2fe65cd1a793abafeb885cc5162 |
Tahsin-Mayeesha/distilbert-finetuned-fakenews | Tahsin-Mayeesha | distilbert | 12 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,542 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-fakenews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0049
- Accuracy: 0.9995
- F1: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0392 | 1.0 | 500 | 0.0059 | 0.999 | 0.999 |
| 0.002 | 2.0 | 1000 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 3.0 | 1500 | 0.0047 | 0.9995 | 0.9995 |
| 0.0001 | 4.0 | 2000 | 0.0049 | 0.9995 | 0.9995 |
| 0.0 | 5.0 | 2500 | 0.0049 | 0.9995 | 0.9995 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
| 2bf46d3bff64ff41189e2063fe829da2 |
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2 | anas-awadalla | bert | 16 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 5920fa037692a92921c0d626d3e099a5 |
PrimeQA/squad-v1-roberta-large | PrimeQA | roberta | 11 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['MRC', 'SQuAD 1.1', 'roberta-large'] | false | true | true | 1,996 | false |
# Model description
A RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/).
The model is initialized with [roberta-large](https://huggingface.co/roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad).
## Intended uses & limitations
You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, roberta-large, that we used may be present in our fine-tuned model, squad-v1-roberta-large.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
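Outside of PrimeQA, the checkpoint can also be used with the standard `transformers` question-answering pipeline. A minimal sketch (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="PrimeQA/squad-v1-roberta-large")

result = qa(
    question="What does SQuAD stand for?",
    context="SQuAD (Stanford Question Answering Dataset) is a reading comprehension "
            "dataset consisting of questions posed on a set of Wikipedia articles.",
)
print(result["answer"], result["score"])
```
The BibTeX entries below cite the SQuAD dataset and the RoBERTa model used for initialization.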
```bibtex
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
```bibtex
@article{DBLP:journals/corr/abs-1907-11692,
author = {Yinhan Liu and
Myle Ott and
Naman Goyal and
Jingfei Du and
Mandar Joshi and
Danqi Chen and
Omer Levy and
Mike Lewis and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach},
journal = {CoRR},
volume = {abs/1907.11692},
year = {2019},
url = {http://arxiv.org/abs/1907.11692},
archivePrefix = {arXiv},
eprint = {1907.11692},
timestamp = {Thu, 01 Aug 2019 08:59:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| 10caaec961fab91dfb7d1f118001d04b |
jed351/gpt2_base_zh-hk-shikoto | jed351 | gpt2 | 97 | 34 | transformers | 0 | text-generation | true | false | false | openrail | null | ['jed351/shikoto_zh_hk'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,253 | false |
# gpt2-shikoto
This model was trained on a dataset I obtained from an online novel site.
**Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.**
The base model can be found [here](https://huggingface.co/jed351/gpt2-base-zh-hk), which was obtained by
patching a [GPT2 Chinese model](https://huggingface.co/ckiplab/gpt2-base-chinese) and its tokenizer with Cantonese characters.
Refer to the base model for info on the patching process.
Besides language modeling, another aim of this experiment was to test the accelerate library by offloading certain workloads to the CPU, as well as to find the optimal number of training iterations.
The perplexity of this model is 16.12 after 400,000 steps, compared with 27.02 after 400,000 steps for the previous [attempt](https://huggingface.co/jed351/gpt2_tiny_zh-hk-shikoto).
It took around the same amount of time to train this model, but I only used 1 GPU here.
## Training procedure
Please refer to the [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)
provided by Huggingface.
The model was trained for 400,000 steps on 1 NVIDIA Quadro RTX6000 for around 30 hours at the Research Computing Services of Imperial College London.
### How to use it?
```python
from transformers import AutoTokenizer
from transformers import TextGenerationPipeline, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_base_zh-hk-shikoto")
# try messing around with the parameters
generator = TextGenerationPipeline(model, tokenizer,
max_new_tokens=200,
no_repeat_ngram_size=3) #, device=0) #if you have a GPU
input_string = "your input"
output = generator(input_string)
string = output[0]['generated_text'].replace(' ', '')
print(string)
```
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2 | 9e2e3f32ca83e03a44e707078fa51d85 |
lmqg/t5-base-squad-qg-ae | lmqg | t5 | 40 | 92 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qg_squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation', 'answer extraction'] | true | true | true | 6,970 | false |
# Model Card of `lmqg/t5-base-squad-qg-ae`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question generation and answer extraction, trained jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-squad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-base-squad-qg-ae")
# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 58.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 42.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 26.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 53.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.53 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.35 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 92.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 58.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 70.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 52.57 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 48.21 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 44.33 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 43.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 82.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 69.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-squad-qg-ae/raw/main/trainer_config.json).
## Citation
```bibtex
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 8adb4cd15fd68dde85bad6ec8bef864f |
jhonparra18/wav2vec2-large-xls-r-300m-guarani-small | jhonparra18 | wav2vec2 | 15 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice', 'gn'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 1,551 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4964
- Wer: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.65 | 100 | 1.1326 | 1.0 |
| 1.6569 | 13.32 | 200 | 0.5264 | 0.6478 |
| 1.6569 | 19.97 | 300 | 0.5370 | 0.6261 |
| 0.2293 | 26.65 | 400 | 0.4964 | 0.5957 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| 89084ab4ada687b0d08974c0b690c2ff |
umanlp/TOD-XLMR | umanlp | xlm-roberta | 8 | 5 | transformers | 2 | fill-mask | true | false | false | mit | ['multilingual'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 1,976 | false |
# TOD-XLMR
TOD-XLMR is a conversationally specialized multilingual model based on [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base). It is pre-trained on English conversational corpora consisting of nine human-to-human, multi-turn, task-oriented dialog (TOD) datasets, as proposed in the paper [TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue](https://aclanthology.org/2020.emnlp-main.66.pdf) by Wu et al. and first released in [this repository](https://huggingface.co/TODBERT).
The model is jointly trained with two objectives as proposed in TOD-BERT, including masked language modeling (MLM) and response contrastive loss (RCL). Masked language modeling is a common pretraining strategy utilized for BERT-based architectures, where a random sample of tokens in the input sequence is replaced with the special token [MASK] for predicting the original masked tokens. To further encourage the model to capture dialogic structure (i.e., dialog sequential order), response contrastive loss is implemented by using in-batch negative training with contrastive learning.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR")
model = AutoModelForMaskedLM.from_pretrained("umanlp/TOD-XLMR")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
Or you can also use `AutoModel` to load the pretrained model and further apply to downstream tasks:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("umanlp/TOD-XLMR")
model = AutoModel.from_pretrained("umanlp/TOD-XLMR")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
| 807a7568ac22dac3c1fed427331e6635 |
takizawa/xlm-roberta-base-finetuned-panx-all | takizawa | xlm-roberta | 10 | 8 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
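A minimal NER inference sketch with the `transformers` token-classification pipeline (`aggregation_strategy="simple"` merges word pieces into whole entities; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="takizawa/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean works at Google in Mountain View."))
```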
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| a13bd67713eb6ed2e499fa4082dc6390 |
Axon/resnet50-v1 | Axon | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | ['ImageNet'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Axon', 'Elixir'] | false | true | true | 3,463 | false |
# ResNet
This ResNet50 model was translated from the ONNX ResNetv1 model found
at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx)
The following description is copied from the relevant description at the ONNX repository.
## Use cases
These ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on ImageNet dataset which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes. They are ideal for cases when high accuracy of classification is required.
ImageNet trained models are often used as the base layers for a transfer learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches.
## Description
Deeper neural networks are more difficult to train. The residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers (8× deeper than VGG nets) while still having lower complexity.
## Model
ResNet models consist of residual blocks and were introduced to counter the degradation in accuracy that appears as networks get deeper, caused by the network failing to learn the initial layers.
ResNet v1 uses post-activation for the residual blocks.
### Input
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224.
Inference was done using a JPEG image.
### Preprocessing
The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing.
### Output
The model outputs image scores for each of the 1000 classes of ImageNet.
### Postprocessing
The post-processing involves calculating the softmax probability scores for each class. You can also sort them to report the most probable classes. Check the `imagenet_postprocess.py` script in the upstream ONNX models repository for reference code.
## Dataset
Dataset used for training and validation: [ImageNet (ILSVRC2012)](http://www.image-net.org/challenges/LSVRC/2012/). See the `imagenet_prep` guide in the upstream ONNX models repository for guidelines on preparing the dataset.
## References
* **ResNetv1**
[Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385)
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
* **ONNX source model**
[onnx/models vision/classification/resnet resnet50-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
| c955c219ad4d42cf0f281bbec9143501 |
Helsinki-NLP/opus-mt-lt-eo | Helsinki-NLP | marian | 11 | 14 | transformers | 0 | translation | true | true | false | apache-2.0 | ['lt', 'eo'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,990 | false |
### lit-epo
* source group: Lithuanian
* target group: Esperanto
* OPUS readme: [lit-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lit.epo | 13.0 | 0.313 |
### System Info:
- hf_name: lit-epo
- source_languages: lit
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'eo']
- src_constituents: {'lit'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt
- src_alpha3: lit
- tgt_alpha3: epo
- short_pair: lt-eo
- chrF2_score: 0.313
- bleu: 13.0
- brevity_penalty: 1.0
- ref_len: 70340.0
- src_name: Lithuanian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: lt
- tgt_alpha2: eo
- prefer_old: False
- long_pair: lit-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 3998f82e48c3f1c376d7c19c2b4f5bfd |
jonatasgrosman/exp_w2v2r_fr_vp-100k_age_teens-8_sixties-2_s607 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fr'] | false | true | true | 497 | false | # exp_w2v2r_fr_vp-100k_age_teens-8_sixties-2_s607
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| d177a512933125efe58013a57e1ca8a8 |
tomaccer/flan-t5-base-juraqanda | tomaccer | t5 | 13 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,792 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-juraqanda
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0784
- Rouge1: 9.5491
- Rouge2: 1.4927
- Rougel: 8.828
- Rougelsum: 9.2708
- Gen Len: 18.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 4.0303 | 1.0 | 712 | 3.3466 | 9.4455 | 1.2684 | 8.8558 | 9.1832 | 18.7577 |
| 3.6049 | 2.0 | 1424 | 3.1931 | 10.0714 | 1.4116 | 9.4163 | 9.8024 | 18.6461 |
| 3.3464 | 3.0 | 2136 | 3.1246 | 9.6542 | 1.4317 | 8.9441 | 9.36 | 18.5485 |
| 3.2831 | 4.0 | 2848 | 3.0910 | 9.6676 | 1.4584 | 8.9533 | 9.3876 | 18.6706 |
| 3.2176 | 5.0 | 3560 | 3.0784 | 9.5491 | 1.4927 | 8.828 | 9.2708 | 18.5260 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| de31b18e67a0f0c07135c5a640d9d6f3 |
virto/mt5-base-finetuned-rabbi-kook | virto | mt5 | 11 | 5 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,201 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-rabbi-kook
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2102 | 1.0 | 3567 | 2.4526 |
| 3.0283 | 2.0 | 7134 | 2.3861 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.11.0
| 5fe8c303b36bdb894f9e870bcde4795a |
bkim12/t5-small-finetuned-eli5 | bkim12 | t5 | 10 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['eli5'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,408 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6782
- Rouge1: 13.0163
- Rouge2: 1.9263
- Rougel: 10.484
- Rougelsum: 11.8234
- Gen Len: 18.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 3.8841 | 1.0 | 17040 | 3.6782 | 13.0163 | 1.9263 | 10.484 | 11.8234 | 18.9951 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.12.1
| b0f8c85527c9bf6243bc7b1ac6b05d27 |
sztanki/white-walker-style | sztanki | null | 37 | 7 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 2,458 | false | ### white-walker-style Dreambooth model trained by sztanki with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
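For a quick local test, a minimal `diffusers` sketch (assuming a CUDA GPU and that the prompt phrasing below matches the trained concept token) could look like:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sztanki/white-walker-style", torch_dtype=torch.float16
).to("cuda")

# Use the concept token from this card in the prompt
image = pipe("a portrait of a knight, white walker style").images[0]
image.save("white_walker_knight.png")
```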
Sample pictures of:
white (use that in your prompt)
![white 0](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%281%29.jpg)![white 1](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%282%29.jpg)![white 2](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%283%29.jpg)![white 3](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%284%29.jpg)![white 4](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%285%29.jpg)![white 5](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%286%29.jpg)![white 6](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%287%29.jpg)![white 7](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%288%29.jpg)![white 8](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%289%29.jpg)![white 9](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2810%29.jpg)![white 10](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2811%29.jpg)![white 11](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2812%29.jpg)![white 12](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2813%29.jpg)![white 13](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2814%29.jpg)![white 14](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2815%29.jpg)![white 15](https://huggingface.co/sztanki/white-walker-style/resolve/main/concept_images/white_walker_style_%2816%29.jpg)
| 4b24d3bca1fd4927cc4cdd24b4c468ce |
DeepaKrish/roberta-base-squad2-finetuned | DeepaKrish | roberta | 13 | 6 | transformers | 0 | question-answering | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,223 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 27 | 0.0023 |
| No log | 2.0 | 54 | 0.0010 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.9.0
- Datasets 2.5.1
- Tokenizers 0.13.2
| 1038ace3d7ed3cfd19b6cce32a56bcad |
jraramhoej/whisper-small-lt-sr-v2 | jraramhoej | whisper | 19 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 951 | false |
# Whisper Small Lithuanian and Serbian sequentially trained
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
### Lithuanian
- Wer: >100
### Serbian
- Wer: 35.6131
## Training procedure
It was first trained for 2000 steps on Lithuanian and then for 2000 steps on Serbian, continuing from the last Lithuanian checkpoint.
### Training hyperparameters per fine-tune
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| f079deb538048451c9a23166679d15f9 |
SherlockHolmes/ddpm-butterflies-128 | SherlockHolmes | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,236 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
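A minimal, unofficial sketch of sampling from this unconditional pipeline with `diffusers`:
```python
from diffusers import DDPMPipeline

# Load the unconditional butterfly model and sample one image
pipeline = DDPMPipeline.from_pretrained("SherlockHolmes/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly_sample.png")
```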
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/SherlockHolmes/ddpm-butterflies-128/tensorboard?#scalars)
| 0d1ababa80f2d4d97660933f5b9d1e23 |
fathyshalab/all-roberta-large-v1-banking-16-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,507 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-16-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7470
- Accuracy: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8182 | 1.0 | 1 | 2.7709 | 0.0356 |
| 2.6751 | 2.0 | 2 | 2.7579 | 0.0578 |
| 2.5239 | 3.0 | 3 | 2.7509 | 0.0622 |
| 2.4346 | 4.0 | 4 | 2.7470 | 0.0756 |
| 2.4099 | 5.0 | 5 | 2.7452 | 0.0756 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
| 11c4bea0e6c0601bfc564b39760a81ab |
tomekkorbak/dreamy_poitras | tomekkorbak | null | 2 | 0 | null | 0 | null | false | false | false | mit | ['en'] | ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 8,521 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dreamy_poitras
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.01,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.0},
'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'num_additional_tokens': 2,
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'dreamy_poitras',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1b5oov3g | af2560deb5f15fa1dca9ef81ac72e0e8 |
gngpostalsrvc/BERiT_27000 | gngpostalsrvc | roberta | 11 | 8 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,840 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_27000
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.7297 | 0.19 | 500 | 8.5541 |
| 8.5592 | 0.39 | 1000 | 8.5536 |
| 8.4892 | 0.58 | 1500 | 8.5554 |
| 8.5288 | 0.77 | 2000 | 8.4786 |
| 8.5034 | 0.97 | 2500 | 8.4756 |
| 8.3497 | 1.16 | 3000 | 8.4821 |
| 8.4516 | 1.36 | 3500 | 8.4742 |
| 8.4224 | 1.55 | 4000 | 8.3972 |
| 8.3356 | 1.74 | 4500 | 8.4158 |
| 8.3805 | 1.94 | 5000 | 8.3800 |
| 8.2947 | 2.13 | 5500 | 8.4242 |
| 8.2475 | 2.32 | 6000 | 8.4334 |
| 8.2708 | 2.52 | 6500 | 8.3504 |
| 8.2559 | 2.71 | 7000 | 8.4211 |
| 8.3676 | 2.9 | 7500 | 8.3744 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| ff57e3b382b4f6de6a1d0a0810876e9d |
ireneisdoomed/clinical_trial_stop_reasons_custom | ireneisdoomed | bert | 13 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,199 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical_trial_stop_reasons_custom
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- Accuracy Thresh: 0.9570
- F1 Micro: 0.5300
- F1 Macro: 0.1254
- Confusion Matrix: [[5940, 15], [270, 150]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Thresh | F1 Micro | F1 Macro | Confusion Matrix |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:--------:|:--------:|:--------------------------:|
| No log | 1.0 | 106 | 0.2812 | 0.8328 | 0.0 | 0.0 | [[5955, 0], [420, 0]] |
| No log | 2.0 | 212 | 0.2189 | 0.9382 | 0.0 | 0.0 | [[5955, 0], [420, 0]] |
| No log | 3.0 | 318 | 0.1840 | 0.9489 | 0.0 | 0.0 | [[5955, 0], [420, 0]] |
| No log | 4.0 | 424 | 0.1638 | 0.9485 | 0.4940 | 0.0989 | [[5943, 12], [288, 132]] |
| 0.239 | 5.0 | 530 | 0.1526 | 0.9533 | 0.5060 | 0.1018 | [[5943, 12], [277, 143]] |
| 0.239 | 6.0 | 636 | 0.1467 | 0.9564 | 0.5077 | 0.1020 | [[5938, 17], [275, 145]] |
| 0.239 | 7.0 | 742 | 0.1448 | 0.9570 | 0.5300 | 0.1254 | [[5940, 15], [270, 150]] |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
| 05d074cacd121b04b4e0d879f6ae0be4 |
yuhuizhang/finetuned_gpt2_sst2_negation0.2_pretrainedFalse | yuhuizhang | gpt2 | 11 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,246 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2_sst2_negation0.2_pretrainedFalse
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9034 | 1.0 | 1072 | 5.5636 |
| 4.5404 | 2.0 | 2144 | 5.3854 |
| 4.368 | 3.0 | 3216 | 5.3370 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.13.1+cu117
- Datasets 2.5.2
- Tokenizers 0.12.1
| 29f3b953704ebca44f2728eb8bc23c39 |
IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese | IDEA-CCNL | null | 6 | 28 | transformers | 2 | null | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,818 | false |
# Randeng-TransformerXL-1.1B-Paraphrasing-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
基于Transformer-XL的中文句子改写。
Paraphrase Chinese sentences based on Transformer-XL.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | Transformer | 1.1B | 中文-改写 Chinese-Paraphrase |
## 模型信息 Model Information
在悟道语料库(280G版本)和标注的相似句子对数据集上进行预训练。
The Transformer-XL model was pre-trained on the Wudao Corpus (with 280G samples) and annotated similar-sentence pair dataset.
## 使用 Usage
### 加载模型 Loading Models
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.transfo_xl_paraphrase import TransfoXLModel
from transformers import T5Tokenizer as TransfoXLTokenizer
model = TransfoXLModel.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese')
tokenizer = TransfoXLTokenizer.from_pretrained('IDEA-CCNL/Randeng-TransformerXL-1.1B-Paraphrasing-Chinese',
eos_token = '<|endoftext|>',
extra_ids=0)
```
### 使用示例 Usage Examples
```python
from fengshen.models.transfo_xl_paraphrase import paraphrase_generate
input_text = "年轻教师选择农村学校,还是县城学校?"
res = paraphrase_generate(model, tokenizer, input_text, device=0)
print(res)
#
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| 848a8cf1d5ae5cbfba038f975c4bfc0b |
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave | espnet | null | 17 | 1 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['ja'] | ['jsut'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,853 | false | ## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381098/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
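In the meantime, a minimal inference sketch (not part of the original card) based on the current ESPnet2 API, assuming the `espnet` and `espnet_model_zoo` packages are installed; a neural vocoder can be plugged in for higher quality:
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download and load the packed model from the Hugging Face Hub
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave"
)

output = tts("こんにちは、今日はいい天気ですね。")
sf.write("output.wav", output["wav"].numpy(), tts.fs)
```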
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 81bd86633f8906da14ed721328936003 |
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_wnli | gokuls | mobilebert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,631 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_pretrain_wnli
This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 5 | nan | 0.5634 |
| 0.0 | 2.0 | 10 | nan | 0.5634 |
| 0.0 | 3.0 | 15 | nan | 0.5634 |
| 0.0 | 4.0 | 20 | nan | 0.5634 |
| 0.0 | 5.0 | 25 | nan | 0.5634 |
| 0.0 | 6.0 | 30 | nan | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| f97fae05e8869f582f2acc4dc5a4dac9 |
voidful/asr_hubert_cluster_bart_base | voidful | bart | 10 | 5 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['librispeech'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'asr', 'hubert'] | false | true | true | 2,562 | false |
# voidful/asr_hubert_cluster_bart_base
## Usage
Download the k-means model and a sample audio file:
```shell
wget https://raw.githubusercontent.com/voidful/hubert-cluster-code/main/km_feat_100_layer_20
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
```
HuBERT k-means quantization code:
```python
import joblib
import torch
from transformers import Wav2Vec2FeatureExtractor, HubertModel
import soundfile as sf
class HubertCode(object):
def __init__(self, hubert_model, km_path, km_layer):
self.processor = Wav2Vec2FeatureExtractor.from_pretrained(hubert_model)
self.model = HubertModel.from_pretrained(hubert_model)
self.km_model = joblib.load(km_path)
self.km_layer = km_layer
self.C_np = self.km_model.cluster_centers_.transpose()
self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True)
self.C = torch.from_numpy(self.C_np)
self.Cnorm = torch.from_numpy(self.Cnorm_np)
if torch.cuda.is_available():
self.C = self.C.cuda()
self.Cnorm = self.Cnorm.cuda()
self.model = self.model.cuda()
def __call__(self, filepath, sampling_rate=None):
speech, sr = sf.read(filepath)
input_values = self.processor(speech, return_tensors="pt", sampling_rate=sr).input_values
if torch.cuda.is_available():
input_values = input_values.cuda()
hidden_states = self.model(input_values, output_hidden_states=True).hidden_states
x = hidden_states[self.km_layer].squeeze()
dist = (
x.pow(2).sum(1, keepdim=True)
- 2 * torch.matmul(x, self.C)
+ self.Cnorm
)
return dist.argmin(dim=1).cpu().numpy()
```
Input:
```python
hc = HubertCode("facebook/hubert-large-ll60k", './km_feat_100_layer_20', 20)
voice_ids = hc('./sample1.flac')
```
BART model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("voidful/asr_hubert_cluster_bart_base")
model = AutoModelForSeq2SeqLM.from_pretrained("voidful/asr_hubert_cluster_bart_base")
```
Generate output:
```python
gen_output = model.generate(input_ids=tokenizer("".join([f":vtok{i}:" for i in voice_ids]),return_tensors='pt').input_ids,max_length=1024)
print(tokenizer.decode(gen_output[0], skip_special_tokens=True))
```
## Result
`going along slushy country roads and speaking to damp audience in drifty school rooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to ask immediately afterwards`
| 98713a57a8897596bb488c011abe759c |
takizawa/xlm-roberta-base-finetuned-panx-fr | takizawa | xlm-roberta | 10 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 4f181cf957d8ea65f526de767d7df9c5 |
rossanez/t5-small-finetuned-de-en-256-lr2e-4 | rossanez | t5 | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wmt14'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,169 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-de-en-256-lr2e-4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 188 | 2.1169 | 7.6948 | 17.4103 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 4c01d75c95f5e1becbbead76feab9750 |
gwz0202/ddpm-butterflied-128 | gwz0202 | null | 13 | 1 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/few-shot-pokemon'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,215 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflied-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/few-shot-pokemon` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
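As with other unconditional `diffusers` checkpoints, a rough usage sketch (seeded for reproducibility) would be:
```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("gwz0202/ddpm-butterflied-128")

# Fix the random seed so repeated runs produce the same sample
generator = torch.manual_seed(0)
image = pipeline(generator=generator).images[0]
image.save("sample.png")
```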
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gwz0202/ddpm-butterflied-128/tensorboard?#scalars)
| e199738c3a29ab2f9338929adee82fb1 |
sshreshtha/vit-base-patch32-224-in21k-finetuned-eurosat | sshreshtha | vit | 13 | 16 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['food101'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,476 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6175
- Accuracy: 0.7321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6483 | 1.0 | 532 | 2.5574 | 0.6605 |
| 1.8885 | 2.0 | 1064 | 1.8063 | 0.7182 |
| 1.6371 | 3.0 | 1596 | 1.6175 | 0.7321 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 342511488ef5d9a04bd0005d085ce1f3 |
Langboat/mengzi-t5-base-mt | Langboat | t5 | 6 | 625 | transformers | 15 | text2text-generation | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,016 | false |
# Mengzi-T5-MT model
This is a Multi-Task model trained on the multitask mixture of 27 datasets and 301 prompts, based on [Mengzi-T5-base](https://huggingface.co/Langboat/mengzi-t5-base).
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base-mt")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base-mt")
```
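A short generation sketch continuing from the snippet above; the prompt is only a hypothetical example and does not necessarily match the task prompt formats used in the multitask mixture:
```python
# Hypothetical prompt – replace with a prompt format from the training mixture
input_text = "翻译成英文:今天天气很好。"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```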
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 510aab3ff0cf225c7fb10d91c0acda24 |
prows12/wav2vec2-base-timit-demo-test_jong | prows12 | wav2vec2 | 12 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,018 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-test_jong
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
| 0752c6b9af06ecbc2a8bc8a264b14cfb |
erickfm/t5-base-finetuned-bias | erickfm | t5 | 7 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['WNC'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 490 | false |
This model is a fine-tuned checkpoint of [T5-base](https://huggingface.co/t5-base), trained on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. The model reaches an accuracy of 0.39 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-base).
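A hedged usage sketch (the exact input formatting used during fine-tuning is not documented in this card, so the plain-text input below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("erickfm/t5-base-finetuned-bias")
model = AutoModelForSeq2SeqLM.from_pretrained("erickfm/t5-base-finetuned-bias")

# Feed a (possibly biased) sentence and generate a neutralized rewrite
inputs = tokenizer("John is a fantastic and visionary leader.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```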
| 18a263ee95e823383d45d2ed08619b57 |
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05 | lixiqi | beit | 19 | 43 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['image_folder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,943 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7881
- Accuracy: 0.7221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2307 | 1.0 | 224 | 1.0863 | 0.5874 |
| 1.0893 | 2.0 | 448 | 0.9700 | 0.6362 |
| 1.0244 | 3.0 | 672 | 0.8859 | 0.6757 |
| 1.016 | 4.0 | 896 | 0.8804 | 0.6787 |
| 0.9089 | 5.0 | 1120 | 0.8611 | 0.6897 |
| 0.8935 | 6.0 | 1344 | 0.8283 | 0.7028 |
| 0.8403 | 7.0 | 1568 | 0.8116 | 0.7102 |
| 0.8179 | 8.0 | 1792 | 0.7934 | 0.7166 |
| 0.7764 | 9.0 | 2016 | 0.7865 | 0.7208 |
| 0.771 | 10.0 | 2240 | 0.7881 | 0.7221 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| aa77eb57ac76e6111ebc339ca918807f |
stinoco/distilbert-base-uncased-finetuned-emotion | stinoco | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2284
- Accuracy: 0.9195
- F1: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8441 | 1.0 | 250 | 0.3260 | 0.9 | 0.8970 |
| 0.2551 | 2.0 | 500 | 0.2284 | 0.9195 | 0.9195 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| ad5f8c9a207a7fd932c02d3e4d1bf1d8 |
jonatasgrosman/exp_w2v2t_id_xlsr-53_s149 | jonatasgrosman | wav2vec2 | 10 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 461 | false | # exp_w2v2t_id_xlsr-53_s149
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 7db6b398e50b9f39a390ef017720abf5 |
Lykon/DreamShaper | Lykon | null | 33 | 2,885 | diffusers | 64 | text-to-image | false | false | false | other | ['en'] | null | null | 10 | 0 | 1 | 9 | 2 | 0 | 2 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers'] | false | true | true | 620 | false |
# Dream Shaper
## Official Repository
Read more about this model here: https://civitai.com/models/4384/dreamshaper
You can run this model on:
- https://huggingface.co/spaces/Lykon/DreamShaper-webui
- https://sinkin.ai/m/4zdwGOB
Some sample output:
![sample 1](https://huggingface.co/Lykon/DreamShaper/resolve/main/1.png)
![sample 2](https://huggingface.co/Lykon/DreamShaper/resolve/main/2.png)
![sample 3](https://huggingface.co/Lykon/DreamShaper/resolve/main/3.png)
![sample 4](https://huggingface.co/Lykon/DreamShaper/resolve/main/4.png)
![sample 5](https://huggingface.co/Lykon/DreamShaper/resolve/main/5.png)
| 6b12ecdb5e52f7a94b04627714b0e87a |
hucruz/custom-textcat-model-viajes | hucruz | bert | 20 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification', 'generated_from_trainer'] | true | true | true | 1,481 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom-textcat-model
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3305
- Accuracy: 0.9541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 209 | 0.3650 | 0.9514 |
| No log | 2.0 | 418 | 0.3371 | 0.9568 |
| 0.0108 | 3.0 | 627 | 0.3305 | 0.9541 |
| 0.0108 | 4.0 | 836 | 0.3465 | 0.9568 |
| 0.0056 | 5.0 | 1045 | 0.3498 | 0.9541 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| ea32d5e14f2ea16c3526d584f4eb7e43 |
sashketka/en_food_entity_extractor_v2 | sashketka | null | 25 | 158 | spacy | 0 | token-classification | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 2,711 | false | English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `en_food_entity_extractor_v2` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
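A minimal usage sketch, assuming the pipeline package has been installed locally (e.g. via `pip install` of the released wheel):
```python
import spacy

# Load the installed pipeline and extract FOOD entities
nlp = spacy.load("en_food_entity_extractor_v2")
doc = nlp("I had a margherita pizza and a glass of orange juice for lunch.")
print([(ent.text, ent.label_) for ent in doc.ents])
```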
### Label Scheme
<details>
<summary>View label scheme (114 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `FOOD`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.93 |
| `TOKEN_P` | 99.57 |
| `TOKEN_R` | 99.58 |
| `TOKEN_F` | 99.57 |
| `TAG_ACC` | 97.34 |
| `SENTS_P` | 91.79 |
| `SENTS_R` | 89.14 |
| `SENTS_F` | 90.44 |
| `DEP_UAS` | 92.04 |
| `DEP_LAS` | 90.23 |
| `ENTS_P` | 85.35 |
| `ENTS_R` | 85.93 |
| `ENTS_F` | 85.64 | | 4bc497f1299e6b916c7bcad9cb32c5af |
Shenghao1993/xlm-roberta-base-finetuned-panx-it | Shenghao1993 | xlm-roberta | 9 | 13 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2403
- F1: 0.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
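As an illustrative sketch (not part of the original card), a fine-tuned NER checkpoint like this one can usually be queried through the token-classification pipeline; the Italian example sentence below is made up.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an NER pipeline.
ner = pipeline(
    "token-classification",
    model="Shenghao1993/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge sub-word predictions into entity spans
)

print(ner("Dante Alighieri nacque a Firenze."))
```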
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7053 | 1.0 | 70 | 0.3077 | 0.7587 |
| 0.2839 | 2.0 | 140 | 0.2692 | 0.8007 |
| 0.1894 | 3.0 | 210 | 0.2403 | 0.8358 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| d688148aba1dc75fca23926a0235a120 |
dptrsa/ec_model | dptrsa | roberta | 20 | 53 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,235 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 497 | 1.1985 |
| 1.578 | 2.0 | 994 | 1.0032 |
| 1.187 | 3.0 | 1491 | 0.9479 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| ee490705b13c141cc3ba0945cc7f03be |
pere/nb-nn-translation | pere | null | 14 | 173 | null | 2 | translation | true | false | true | cc-by-4.0 | False | ['oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,927 | false |
# 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴
Norwegian has two relatively similar written languages: Bokmål and Nynorsk. Historically, Nynorsk is a written norm based on dialects curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish.
The two written languages are considered equal, and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been around for a long time, only between 5-10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language.
Apart from some word-list based engines, there are no working off-the-shelf machine-learning-based translation models. Translation between Bokmål and Nynorsk is not available in Google Translate.
## Demo
| | |
|---|---|
| Widget | Try the widget in the top right corner |
| Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) |
| | |
## Pretraining a T5-base
There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately a very small part of this is Nynorsk; there is only around 1GB Nynorsk text in mC4. Despite this, the mt5 also gives a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type i.e. book, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB.
## Finetuning - BLEU-SCORE 88.17 🎉
The central fine-tuning data of the project has been 200k translation units (TUs), i.e. aligned pairs of sentences in the respective languages, extracted from textbooks of various subjects and newspapers.
Training for 10 epochs with a learning rate of 7e-4, a batch size of 32 and a max source and target length of 512, the fine-tuning reached a SacreBLEU score of 88.03 during training and a test score of **88.17** after training.
## This is not a translator
We found that we were able to get an almost identical BLEU score by training in both directions and letting the model decide whether the input is Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher.
## Future work
The following Google Docs Add-on is currently pending approval.
![Add-on](bm2nn_demo.gif)
## How to use the model
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-translation')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
``` | e79e2c6eb877c76263039bfaa607d137 |
cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab2 | cwchengtw | wav2vec2 | 11 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,790 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3738
- Wer: 0.3532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9022 | 3.7 | 400 | 0.6778 | 0.7414 |
| 0.4106 | 7.4 | 800 | 0.4123 | 0.5049 |
| 0.1862 | 11.11 | 1200 | 0.4260 | 0.4232 |
| 0.1342 | 14.81 | 1600 | 0.3951 | 0.4097 |
| 0.0997 | 18.51 | 2000 | 0.4100 | 0.3999 |
| 0.0782 | 22.22 | 2400 | 0.3918 | 0.3875 |
| 0.059 | 25.92 | 2800 | 0.3803 | 0.3698 |
| 0.0474 | 29.63 | 3200 | 0.3738 | 0.3532 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| ba810a8a5e0da8cd2ce5937aefe5fb39 |
Prang9/distilbert-base-uncased-finetuned-imdb | Prang9 | distilbert | 9 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
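As a hedged sketch (not part of the original card), the checkpoint can be queried with the fill-mask pipeline; `[MASK]` is the mask token of the distilbert-base-uncased tokenizer, and the example sentence is made up.
```python
from transformers import pipeline

# Load the fine-tuned masked language model.
fill_mask = pipeline("fill-mask", model="Prang9/distilbert-base-uncased-finetuned-imdb")

# Print the top predictions for the masked position.
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```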
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| d63af670b27c7df680821f06c5912ae6 |
mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis | mmillet | bert | 12 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,064 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5820
- Accuracy: 0.7881
- F1: 0.7886
- Precision: 0.7906
- Recall: 0.7881
## Model description
More information needed
## Intended uses & limitations
More information needed
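As a hedged sketch (not part of the original card), the checkpoint can be loaded with the text-classification pipeline; since the base model is a Russian encoder, the made-up example below is in Russian, and the emitted label names depend on how the classification head was configured.
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="mmillet/rubert-tiny2_finetuned_emotion_experiment_augmented_anger_fear_no_emojis",
)

print(classifier("Мне сегодня очень грустно."))
```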
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0996 | 1.0 | 69 | 1.0013 | 0.6879 | 0.6779 | 0.7070 | 0.6879 |
| 0.9524 | 2.0 | 138 | 0.8651 | 0.7265 | 0.7245 | 0.7322 | 0.7265 |
| 0.8345 | 3.0 | 207 | 0.7821 | 0.7422 | 0.7413 | 0.7445 | 0.7422 |
| 0.7573 | 4.0 | 276 | 0.7222 | 0.7484 | 0.7473 | 0.7482 | 0.7484 |
| 0.6923 | 5.0 | 345 | 0.6828 | 0.7568 | 0.7562 | 0.7562 | 0.7568 |
| 0.6412 | 6.0 | 414 | 0.6531 | 0.7568 | 0.7559 | 0.7556 | 0.7568 |
| 0.5982 | 7.0 | 483 | 0.6320 | 0.7610 | 0.7601 | 0.7597 | 0.7610 |
| 0.5593 | 8.0 | 552 | 0.6133 | 0.7651 | 0.7655 | 0.7664 | 0.7651 |
| 0.5183 | 9.0 | 621 | 0.6036 | 0.7714 | 0.7708 | 0.7709 | 0.7714 |
| 0.5042 | 10.0 | 690 | 0.5951 | 0.7756 | 0.7755 | 0.7760 | 0.7756 |
| 0.483 | 11.0 | 759 | 0.5878 | 0.7766 | 0.7768 | 0.7774 | 0.7766 |
| 0.4531 | 12.0 | 828 | 0.5855 | 0.7850 | 0.7841 | 0.7839 | 0.7850 |
| 0.4386 | 13.0 | 897 | 0.5828 | 0.7797 | 0.7790 | 0.7786 | 0.7797 |
| 0.4238 | 14.0 | 966 | 0.5788 | 0.7777 | 0.7780 | 0.7786 | 0.7777 |
| 0.4018 | 15.0 | 1035 | 0.5793 | 0.7839 | 0.7842 | 0.7855 | 0.7839 |
| 0.3998 | 16.0 | 1104 | 0.5801 | 0.7850 | 0.7844 | 0.7841 | 0.7850 |
| 0.3747 | 17.0 | 1173 | 0.5791 | 0.7839 | 0.7836 | 0.7833 | 0.7839 |
| 0.3595 | 18.0 | 1242 | 0.5799 | 0.7891 | 0.7891 | 0.7894 | 0.7891 |
| 0.3575 | 19.0 | 1311 | 0.5820 | 0.7881 | 0.7886 | 0.7906 | 0.7881 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| a9dd52aa859a36c5dfc520aa4e36b43f |
KoboldAI/OPT-2.7B-Erebus | KoboldAI | opt | 9 | 21,342 | transformers | 10 | text-generation | true | false | false | other | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,330 | false | # OPT 2.7B - Erebus
## Model description
This is the second generation of the original Shinen made by Mr. Seeker. The full dataset consists of 6 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and means "darkness". This is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The data can be divided in 6 different datasets:
- Literotica (everything with 4.5/5 or higher)
- Sexstories (everything with 90 or higher)
- Dataset-G (private dataset of X-rated stories)
- Doc's Lab (all stories)
- Pike Dataset (novels with "adult" rating)
- SoFurry (collection of various animals)
The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/OPT-2.7B-Erebus')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}]
```
## Limitations and biases
Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). **Warning: This model has a very strong NSFW bias!**
### License
OPT-2.7B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### BibTeX entry and citation info
```
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | a86245e3e3b45ee168a60be14b4bfb76 |
projecte-aina/roberta-large-ca-paraphrase | projecte-aina | roberta | 11 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['ca'] | ['projecte-aina/Parafraseja'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['catalan', 'paraphrase', 'textual entailment'] | true | true | true | 4,581 | false |
# Catalan BERTa (roberta-large-ca-v2) finetuned for Paraphrase Detection
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-large-ca-paraphrase** is a Paraphrase Detection model for the Catalan language, fine-tuned from the roberta-large-ca-v2 model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended uses and limitations
**roberta-large-ca-paraphrase** model can be used to detect if two sentences are in a paraphrase relation. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="projecte-aina/roberta-large-ca-paraphrase")
example = "Tinc un amic a Manresa. </s></s> A Manresa hi viu un amic meu."
paraphrase = nlp(example)
pprint(paraphrase)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the Paraphase Detection dataset in Catalan [Parafraseja](https://huggingface.co/datasets/projecte-aina/Parafraseja) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing the combined_score.
## Evaluation results
We evaluated the _roberta-large-ca-paraphrase_ on the Parafraseja test set against standard multilingual and monolingual baselines:
| Model | Parafraseja (combined_score) |
| ------------|:-------------|
| roberta-large-ca-v2 |**86.42** |
| roberta-base-ca-v2 |84.38 |
| mBERT | 79.66 |
| XLM-RoBERTa | 77.83 |
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
NA
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. | ba3147b5c5bff3c4bd8bef0f86f77999 |
Sentdex/GPyT | Sentdex | gpt2 | 11 | 185 | transformers | 14 | text-generation | true | true | false | mit | ['Python'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Code', 'GPyT', 'code generator'] | false | true | true | 1,907 | false |
GPyT is a GPT-2 model trained from scratch (not fine-tuned) on Python code from GitHub. Overall, it was ~80GB of pure Python code; the current GPyT model has only been trained for 2 epochs on this data, so it may benefit greatly from continued training and/or fine-tuning.
Newlines are replaced by `<N>`
Input to the model is code, up to the context length of 1024, with newlines replaced by `<N>`
Here's a quick example of using this model:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelWithLMHead.from_pretrained("Sentdex/GPyT")
# copy and paste some code in here
inp = """import"""
newlinechar = "<N>"
converted = inp.replace("\n", newlinechar)
tokenized = tokenizer.encode(converted, return_tensors='pt')
resp = model.generate(tokenized)
decoded = tokenizer.decode(resp[0])
reformatted = decoded.replace("<N>","\n")
print(reformatted)
```
Should produce:
```
import numpy as np
import pytest
import pandas as pd<N
```
This model does a ton more than just imports, however. For a bunch of examples and a better understanding of the model's capabilities:
https://pythonprogramming.net/GPT-python-code-transformer-model-GPyT/
Considerations:
1. This model is intended for educational and research use only. Do not trust model outputs.
2. Model is highly likely to regurgitate code almost exactly as it saw it. It's up to you to determine licensing if you intend to actually use the generated code.
3. All Python code was blindly pulled from github. This means included code is both Python 2 and 3, among other more subtle differences, such as tabs being 2 spaces in some cases and 4 in others...and more non-homologous things.
4. Along with the above, this means the code generated could wind up doing or suggesting just about anything. Run the generated code at own risk...it could be *anything*
| 38a80fa206d3f13768ecb491f86afae6 |
bharat-raghunathan/Tamil-Wav2Vec-xls-r-300m-Tamil-colab | bharat-raghunathan | wav2vec2 | 18 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'ta', 'robust-speech-event'] | true | true | true | 1,078 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tamil-Wav2Vec-xls-r-300m-Tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| e9cf15628a02e3311192f7955eb582cc |
NAWNIE/golden-hour-photography | NAWNIE | null | 25 | 12 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 1,429 | false | ### Golden_hour_photography Dreambooth model trained by NAWNIE with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
![0](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01122-{prompt}.png)
![1](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01139-{prompt}.png)
![2](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01138-{prompt}.png)
![3](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01140-{prompt}.png)
![4](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01123-{prompt}.png)
![5](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01120-{prompt}.png)
![6](https://huggingface.co/NAWNIE/golden-hour-photography/resolve/main/sample_images/01126-{prompt}.png)
| a652ede18d7baec1105af1f9c8b84656 |
HUPD/hupd-distilroberta-base | HUPD | roberta | 9 | 42 | transformers | 1 | fill-mask | true | false | false | cc-by-sa-4.0 | ['en'] | ['HUPD/hupd'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hupd', 'roberta', 'distilroberta', 'patents'] | false | true | true | 3,087 | false |
# HUPD DistilRoBERTa-Base Model
This HUPD DistilRoBERTa model was fine-tuned on the HUPD dataset with a masked language modeling objective. It was originally introduced in [this paper](TBD).
For more information about the Harvard USPTO Patent Dataset, please feel free to visit the [project website](https://patentdataset.org/) or the [project's GitHub repository](https://github.com/suzgunmirac/hupd).
### How to Use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
model = pipeline(task="fill-mask", model="hupd/hupd-distilroberta-base")
model("Improved <mask> for playing a game of thumb wrestling.")
```
Here is the output:
```python
[{'score': 0.4274042248725891,
'sequence': 'Improved method for playing a game of thumb wrestling.',
'token': 5448,
'token_str': ' method'},
{'score': 0.06967400759458542,
'sequence': 'Improved system for playing a game of thumb wrestling.',
'token': 467,
'token_str': ' system'},
{'score': 0.06849079579114914,
'sequence': 'Improved device for playing a game of thumb wrestling.',
'token': 2187,
'token_str': ' device'},
{'score': 0.04544765502214432,
'sequence': 'Improved apparatus for playing a game of thumb wrestling.',
'token': 26529,
'token_str': ' apparatus'},
{'score': 0.025765646249055862,
'sequence': 'Improved means for playing a game of thumb wrestling.',
'token': 839,
'token_str': ' means'}]
```
Alternatively, you can load the model and use it as follows:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
# cuda/cpu
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("hupd/hupd-distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("hupd/hupd-distilroberta-base").to(device)
TEXT = "Improved <mask> for playing a game of thumb wrestling."
inputs = tokenizer(TEXT, return_tensors="pt").to(device)
with torch.no_grad():
logits = model(**inputs).logits
# retrieve indices of <mask>
mask_token_indxs = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
for mask_idx in mask_token_indxs:
predicted_token_id = logits[0, mask_idx].argmax(axis=-1)
output = tokenizer.decode(predicted_token_id)
print(f'Prediction for the <mask> token at index {mask_idx}: "{output}"')
```
Here is the output:
```python
Prediction for the <mask> token at index 2: " method"
```
## Citation
For more information, please take a look at the original paper.
* Paper: [The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications](TBD)
* Authors: *Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber*
* BibTeX:
```
@article{suzgun2022hupd,
title={The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Suzgun, Mirac and Melas-Kyriazi, Luke and Sarkar, Suproteem K and Kominers, Scott and Shieber, Stuart},
year={2022}
}
``` | d1658aeee5f74e36666d10602da01bf3 |
joewoodworth/ddpm-butterflies-128 | joewoodworth | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,234 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
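Until the official snippet is added, a minimal sketch along these lines should work for an unconditional pipeline trained with the Diffusers example script (the GPU move is optional but strongly recommended, since sampling on CPU is very slow):
```python
from diffusers import DDPMPipeline

# Load the trained unconditional diffusion pipeline from the Hub.
pipeline = DDPMPipeline.from_pretrained("joewoodworth/ddpm-butterflies-128")
pipeline.to("cuda")

# Generate one 128x128 butterfly image.
image = pipeline().images[0]
image.save("butterfly.png")
```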
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/joewoodworth/ddpm-butterflies-128/tensorboard?#scalars)
| e451b873bd1975d7d03e67bab0d1fb4b |
lariskelmer/opus-mt-en-ro-finetuned-en-to-ro | lariskelmer | marian | 13 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['wmt16'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
## Model description
More information needed
## Intended uses & limitations
More information needed
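As a hedged sketch (not part of the original card), the fine-tuned Marian checkpoint can be used with the translation pipeline; the example sentence is made up.
```python
from transformers import pipeline

# English -> Romanian translation with the fine-tuned checkpoint.
translator = pipeline("translation", model="lariskelmer/opus-mt-en-ro-finetuned-en-to-ro")

print(translator("The weather in Bucharest is beautiful today."))
```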
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0c7b1c544a58d15e40ca47b97f7d22a2 |
anas-awadalla/roberta-base-few-shot-k-256-finetuned-squad-seed-6 | anas-awadalla | roberta | 17 | 7 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 242debf15dc411fb4bd8a17708025a05 |
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave | espnet | null | 19 | 6 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['ja'] | ['jsut'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 1,858 | false | ## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4032246/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
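In the meantime, a hedged sketch using the ESPnet model-zoo integration may help (it requires `espnet`, `espnet_model_zoo` and `soundfile`; the exact API and output keys may differ between ESPnet versions):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download and load the packaged model by its Hub tag.
text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave"
)

# Synthesize Japanese text and save the waveform.
output = text2speech("こんにちは、世界。")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```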
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | ede15f3aa5c3a08db391c18174a153b9 |
anas-awadalla/bert-medium-pretrained-finetuned-squad | anas-awadalla | bert | 13 | 8 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,124 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_medium_pretrain_squad
This model is a fine-tuned version of [anas-awadalla/bert-medium-pretrained-on-squad](https://huggingface.co/anas-awadalla/bert-medium-pretrained-on-squad) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- "exact_match": 77.95648060548723
- "f1": 85.85300366384631
## Model description
More information needed
## Intended uses & limitations
More information needed
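As a hedged sketch (not part of the original card), the checkpoint can be used for extractive question answering through the pipeline API; the question/context pair below is made up.
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model.
qa = pipeline("question-answering", model="anas-awadalla/bert-medium-pretrained-finetuned-squad")

result = qa(
    question="What corpus was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD corpus of crowd-sourced questions about Wikipedia articles.",
)
print(result["answer"], result["score"])
```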
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 4eb4d5326d08d9725c7cd711bb5b3852 |
Chikashi/t5-small-finetuned-cnndm3-wikihow2 | Chikashi | t5 | 11 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['cnn_dailymail'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm3-wikihow2
This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow2](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow2) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6265
- Rouge1: 24.6704
- Rouge2: 11.9038
- Rougel: 20.3622
- Rougelsum: 23.2612
- Gen Len: 18.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
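As a hedged sketch (not part of the original card), the checkpoint can be used with the summarization pipeline; the input text and generation settings below are placeholders.
```python
from transformers import pipeline

# Load the fine-tuned T5 summarizer.
summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm3-wikihow2")

article = "Replace this placeholder with a news article or how-to text to be summarized ..."
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```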
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8071 | 1.0 | 71779 | 1.6265 | 24.6704 | 11.9038 | 20.3622 | 23.2612 | 18.9997 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| a98580004b6782764aa2a8e65f5e4683 |
ConvLab/setsumbt-dst-sgd | ConvLab | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | ['en'] | ['ConvLab/sgd'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['roberta', 'classification', 'dialog state tracking', 'conversational system', 'task-oriented dialog'] | true | true | true | 753 | false |
# SetSUMBT-dst-sgd
This model is a [SetSUMBT](https://github.com/ConvLab/ConvLab-3/tree/master/convlab/dst/setsumbt) dialogue state tracker, a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base), trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00001
- train_batch_size: 3
- eval_batch_size: 16
- seed: 0
- gradient_accumulation_steps: 1
- optimizer: AdamW
- lr_scheduler_type: linear
- num_epochs: 50.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu110
- Datasets 2.3.2
- Tokenizers 0.12.1
| ca502d21021e9ba5fe2753c291cc5fbb |
SamSick/TriviaQA_NLP4Web_Group12 | SamSick | bert | 15 | 22 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 958 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on the TriviaQA Dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| a576b81c2d53d334e38dcb199a1724d8 |
rrustom/a-modern-house | rrustom | null | 22 | 4 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,238 | false | ### A modern house on Stable Diffusion via Dreambooth
#### model by rrustom
This is the Stable Diffusion model fine-tuned on the A modern house concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks modern home**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
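For a quick local test, a hedged sketch is given below (it assumes this repository contains full pipeline weights, as Dreambooth library repos typically do, and that a CUDA GPU is available):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline and render the learned concept with its instance prompt.
pipe = StableDiffusionPipeline.from_pretrained("rrustom/a-modern-house", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks modern home at sunset").images[0]
image.save("modern_home.png")
```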
Here are the images used for training this concept:
![image 0](https://huggingface.co/rrustom/a-modern-house/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/rrustom/a-modern-house/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/rrustom/a-modern-house/resolve/main/concept_images/0.jpeg)
![image 3](https://huggingface.co/rrustom/a-modern-house/resolve/main/concept_images/3.jpeg)
| b5153d80337376bf3aaca7f692fa5710 |
anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm | anuragshas | wav2vec2 | 25 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['bn'] | ['mozilla-foundation/common_voice_9_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_9_0', 'generated_from_trainer'] | true | true | true | 2,799 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - BN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2297
- Wer: 0.2850
- Cer: 0.0660
## Model description
More information needed
## Intended uses & limitations
More information needed
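As a hedged sketch (not part of the original card), the checkpoint can be used through the ASR pipeline; language-model decoding additionally requires `pyctcdecode` and `kenlm` to be installed, and `audio.wav` is a placeholder for a 16 kHz Bengali recording.
```python
from transformers import pipeline

# Load the fine-tuned model (with its n-gram LM, if the decoder dependencies are installed).
asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-xls-r-300m-bn-cv9-with-lm")

print(asr("audio.wav"))
```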
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8692
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.675 | 2.3 | 400 | 3.5052 | 1.0 | 1.0 |
| 3.0446 | 4.6 | 800 | 2.2759 | 1.0052 | 0.5215 |
| 1.7276 | 6.9 | 1200 | 0.7083 | 0.6697 | 0.1969 |
| 1.5171 | 9.2 | 1600 | 0.5328 | 0.5733 | 0.1568 |
| 1.4176 | 11.49 | 2000 | 0.4571 | 0.5161 | 0.1381 |
| 1.343 | 13.79 | 2400 | 0.3910 | 0.4522 | 0.1160 |
| 1.2743 | 16.09 | 2800 | 0.3534 | 0.4137 | 0.1044 |
| 1.2396 | 18.39 | 3200 | 0.3278 | 0.3877 | 0.0959 |
| 1.2035 | 20.69 | 3600 | 0.3109 | 0.3741 | 0.0917 |
| 1.1745 | 22.99 | 4000 | 0.2972 | 0.3618 | 0.0882 |
| 1.1541 | 25.29 | 4400 | 0.2836 | 0.3427 | 0.0832 |
| 1.1372 | 27.59 | 4800 | 0.2759 | 0.3357 | 0.0812 |
| 1.1048 | 29.89 | 5200 | 0.2669 | 0.3284 | 0.0783 |
| 1.0966 | 32.18 | 5600 | 0.2678 | 0.3249 | 0.0775 |
| 1.0747 | 34.48 | 6000 | 0.2547 | 0.3134 | 0.0748 |
| 1.0593 | 36.78 | 6400 | 0.2491 | 0.3077 | 0.0728 |
| 1.0417 | 39.08 | 6800 | 0.2450 | 0.3012 | 0.0711 |
| 1.024 | 41.38 | 7200 | 0.2402 | 0.2956 | 0.0694 |
| 1.0106 | 43.68 | 7600 | 0.2351 | 0.2915 | 0.0681 |
| 1.0014 | 45.98 | 8000 | 0.2328 | 0.2896 | 0.0673 |
| 0.9999 | 48.28 | 8400 | 0.2318 | 0.2866 | 0.0667 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
| 4c0b1b3e507208b7e4c30062ae0d1d96 |
Fictiverse/Stable_Diffusion_FluidArt_Model | Fictiverse | null | 18 | 128 | diffusers | 22 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 5 | 0 | 5 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,028 | false | # Fluid Art model V1
This is the fine-tuned Stable Diffusion model trained on Fluid Art images.
Use **FluidArt** in your prompts.
### Sample images:
![FluidArt sample](https://s3.amazonaws.com/moonup/production/uploads/1667898583757-635749860725c2f190a76e88.jpeg)
Based on StableDiffusion 1.5 model
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "PaperCut R2-D2"
image = pipe(prompt).images[0]
image.save("./R2-D2.png")
``` | 6244cd57e7c6b713a506c94964e49d97 |
AdarshRavis/BabishBot | AdarshRavis | gpt2 | 9 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 605 | false |
This is a text generation model fine-tuned on subtitles from Binging with Babish (https://www.youtube.com/c/bingingwithbabish).
Just type in your starting sentence, click "compute" and see what the model has to say! The first time you run the model, it may take a minute to load (after that it takes ~6 seconds to run)
This was created with the help of aitextgen (https://github.com/minimaxir/aitextgen), using a pretrained 124M GPT-2 model.
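Since the underlying checkpoint is a standard GPT-2 language model, a hedged sketch of local usage with the Transformers pipeline (the prompt below is made up) might look like this:
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for text generation.
generator = pipeline("text-generation", model="AdarshRavis/BabishBot")

print(generator("Today we're making", max_length=50, do_sample=True)[0]["generated_text"])
```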
Disclaimer:
This model is intended for parody only and is not affiliated with Binging with Babish or the Babish Culinary Universe.
adi1494/distilbert-base-uncased-finetuned-squad | adi1494 | distilbert | 12 | 3 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,336 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adi1494/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5671
- Validation Loss: 1.2217
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5671 | 1.2217 | 0 |
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
| 56a822fac743f575f9fa427859b19646 |
Sjdan/finetuning12 | Sjdan | wav2vec2 | 23 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,562 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning12
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0 | 0.31 | 500 | nan | 1.0 |
| 0.0 | 0.61 | 1000 | nan | 1.0 |
| 0.0 | 0.92 | 1500 | nan | 1.0 |
| 0.0 | 1.23 | 2000 | nan | 1.0 |
| 0.0 | 1.54 | 2500 | nan | 1.0 |
| 0.0 | 1.84 | 3000 | nan | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 67253dcde5136c975064f1cc39a7cd8e |
yuhuizhang/finetuned_gpt2-medium_sst2_negation0.5 | yuhuizhang | gpt2 | 11 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,251 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.5
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7891 | 1.0 | 1092 | 3.2810 |
| 2.5081 | 2.0 | 2184 | 3.3508 |
| 2.3572 | 3.0 | 3276 | 3.4090 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
| 79da8d85ec21dd38b78abea0df0ad0a7 |
sd-dreambooth-library/alberto-pablo | sd-dreambooth-library | null | 33 | 4 | diffusers | 1 | null | false | false | false | mit | null | null | null | 3 | 3 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,459 | false | ### Alberto_Pablo on Stable Diffusion via Dreambooth
#### model by Ganosh
This is the Stable Diffusion model fine-tuned on the Alberto_Pablo concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks Alberto**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/11.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/4.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/9.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/0.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/2.jpeg)
![image 5](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/14.jpeg)
![image 6](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/7.jpeg)
![image 7](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/13.jpeg)
![image 8](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/3.jpeg)
![image 9](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/10.jpeg)
![image 10](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/8.jpeg)
![image 11](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/1.jpeg)
![image 12](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/12.jpeg)
![image 13](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/6.jpeg)
![image 14](https://huggingface.co/sd-dreambooth-library/alberto-pablo/resolve/main/concept_images/5.jpeg)
| e84b8659ba7e0befce83e4912f6aae9e |
hassnain/wav2vec2-base-timit-demo-colab9 | hassnain | wav2vec2 | 12 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,432 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1922
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 5.0683 | 1.42 | 500 | 3.2471 | 1.0 |
| 3.1349 | 2.85 | 1000 | 3.2219 | 1.0 |
| 3.1317 | 4.27 | 1500 | 3.2090 | 1.0 |
| 3.1262 | 5.7 | 2000 | 3.2152 | 1.0 |
| 3.1307 | 7.12 | 2500 | 3.2147 | 1.0 |
| 3.1264 | 8.55 | 3000 | 3.2072 | 1.0 |
| 3.1279 | 9.97 | 3500 | 3.2158 | 1.0 |
| 3.1287 | 11.4 | 4000 | 3.2190 | 1.0 |
| 3.1256 | 12.82 | 4500 | 3.2069 | 1.0 |
| 3.1254 | 14.25 | 5000 | 3.2134 | 1.0 |
| 3.1259 | 15.67 | 5500 | 3.2231 | 1.0 |
| 3.1269 | 17.09 | 6000 | 3.2005 | 1.0 |
| 3.1279 | 18.52 | 6500 | 3.1988 | 1.0 |
| 3.1246 | 19.94 | 7000 | 3.1929 | 1.0 |
| 3.128 | 21.37 | 7500 | 3.1864 | 1.0 |
| 3.1245 | 22.79 | 8000 | 3.1868 | 1.0 |
| 3.1266 | 24.22 | 8500 | 3.1852 | 1.0 |
| 3.1239 | 25.64 | 9000 | 3.1855 | 1.0 |
| 3.125 | 27.07 | 9500 | 3.1917 | 1.0 |
| 3.1233 | 28.49 | 10000 | 3.1929 | 1.0 |
| 3.1229 | 29.91 | 10500 | 3.1922 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 0c6d32da57bf44f7549376d8e516f3cb |
kevinbror/distilbertbaseuncasedz | kevinbror | distilbert | 4 | 6 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,349 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbertbaseuncasedz
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5368
- Train End Logits Accuracy: 0.8401
- Train Start Logits Accuracy: 0.8078
- Validation Loss: 1.2427
- Validation End Logits Accuracy: 0.7050
- Validation Start Logits Accuracy: 0.6725
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
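A minimal extractive-QA sketch in TensorFlow (not part of the original card; the question/context strings are placeholders and the example assumes the repo ships the matching tokenizer):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("kevinbror/distilbertbaseuncasedz")
model = TFAutoModelForQuestionAnswering.from_pretrained("kevinbror/distilbertbaseuncasedz")

question = "Where do penguins live?"  # placeholder
context = "Penguins live almost exclusively in the Southern Hemisphere."  # placeholder
inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(inputs)

# Pick the most likely start/end tokens and decode that span as the answer.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```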
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.3338 | 0.6448 | 0.6045 | 1.1322 | 0.6906 | 0.6563 | 0 |
| 0.9044 | 0.7466 | 0.7090 | 1.0996 | 0.7032 | 0.6720 | 1 |
| 0.6756 | 0.8042 | 0.7680 | 1.1416 | 0.7047 | 0.6718 | 2 |
| 0.5368 | 0.8401 | 0.8078 | 1.2427 | 0.7050 | 0.6725 | 3 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| 3b8fddc9c4a1c02e1ca32f2624cdfccb |
Rocketknight1/mt5-small-finetuned-amazon-en-es | Rocketknight1 | mt5 | 8 | 4 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,374 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.2613
- Validation Loss: 4.5342
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
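A minimal generation sketch in TensorFlow (not part of the original card; the review text is a placeholder, and since the checkpoint was only trained for one epoch the summaries may be rough):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Rocketknight1/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "I loved this book: the characters were great and the plot kept me hooked."  # placeholder review
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```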
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2613 | 4.5342 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
| 40266767b535f2334ebabb77293a3b71 |
Geotrend/bert-base-sw-cased | Geotrend | bert | 8 | 5 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['sw'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,283 | false |
# bert-base-sw-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-sw-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
| 2d7b4fef410848dc0cf405d251f42237 |
DOOGLAK/Article_50v2_NER_Model_3Epochs_UNAUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['article50v2_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,550 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_50v2_NER_Model_3Epochs_UNAUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v2_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7694
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7776
## Model description
More information needed
## Intended uses & limitations
More information needed
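A loading sketch with the token-classification pipeline (not from the original card; note the 0.0 precision/recall reported above, so the model currently predicts essentially no entities; the sentence is a placeholder):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_50v2_NER_Model_3Epochs_UNAUGMENTED",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Berlin in 2013."))
```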
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 6 | 0.9910 | 0.1161 | 0.0044 | 0.0085 | 0.7766 |
| No log | 2.0 | 12 | 0.8031 | 0.0 | 0.0 | 0.0 | 0.7776 |
| No log | 3.0 | 18 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.7776 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 0d58212dbe11082a5b3a31c0bcf954a4 |
adamwatters/rblx-character | adamwatters | null | 17 | 19 | diffusers | 5 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'] | false | true | true | 1,155 | false |
# DreamBooth model for the rblx concept trained by adamwatters on the adamwatters/roblox-guy dataset.
## Description
<figure>
<img src=https://datasets-server.huggingface.co/assets/adamwatters/roblox-guy/--/adamwatters--roblox-guy/train/7/image/image.jpg width=200px height=200px>
<figcaption align = "left"><b>Screenshot from Roblox used for training</b></figcaption>
</figure>
This is a Stable Diffusion model fine-tuned on images of my specific customized Roblox avatar. The idea is that it might be fun for Roblox players to make images of their avatars in different settings.
It can be used by modifying the instance_prompt: a photo of rblx character
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Generate Images
<img src=https://huggingface.co/datasets/adamwatters/hosted-images/resolve/main/roblox-guy-grid.jpeg width=60%>
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('adamwatters/rblx-character')
image = pipeline("a photo of rblx character").images[0]
image
```
| e41969755723d7af462b127e21a69885 |
Hazzzardous/RWKV-8Bit | Hazzzardous | null | 4 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 703 | false | ## Example usage
```python
from rwkvstic.load import RWKV
# Load the model (supports full path, relative path, and remote paths)
model = RWKV(
"https://huggingface.co/Hazzzardous/RWKV-8Bit/resolve/main/RWKV-4-Pile-7B-Instruct.pqth"
)
model.loadContext(newctx=f"Q: who is Jim Butcher?\n\nA:")
output = model.forward(number=100)["output"]
print(output)
# Q: who is Jim Butcher?
# A: Jim Butcher is a very popular American author of fantasy novels. He’s known for the Dresden Files series of novels.<|endoftext|>
```
## More details here
https://pypi.org/project/rwkvstic/
## Run example notebook
https://colab.research.google.com/github/harrisonvanderbyl/rwkvstic/blob/master/notebooks/chatbot.ipynb
| 4fe59b0e3040b36dd3d4ad801261fcef |
jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s408 | jonatasgrosman | unispeech | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fa'] | false | true | true | 500 | false | # exp_w2v2t_fa_unispeech-ml_s408
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
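A transcription sketch with HuggingSound (the audio paths are placeholders and should point to 16kHz speech):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s408")
audio_paths = ["/path/to/clip_1.mp3", "/path/to/clip_2.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```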
| 64846d0050c8f6b53984dc738f1e9957 |
Huyen2310/FPT25000 | Huyen2310 | whisper | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hi'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 1,034 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
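A minimal inference sketch (not part of the original card), assuming the repo includes the Whisper processor files; the audio path is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Huyen2310/FPT25000")
# The pipeline resamples the input file and runs Whisper's encoder-decoder to produce text.
print(asr("sample.wav")["text"])
```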
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| aefa986a6abf45ce1613e8428f5509e3 |
gayanin/bart-mlm-pubmed-medterm | gayanin | bart | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,386 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-medterm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge2 Precision: 0.985
- Rouge2 Recall: 0.7208
- Rouge2 Fmeasure: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
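A masked-span infilling sketch (not part of the original card; the sentence is a placeholder, and BART's mask token is `<mask>`):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gayanin/bart-mlm-pubmed-medterm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The patient was treated with <mask> for hypertension."  # placeholder sentence
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```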
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0018 | 1.0 | 13833 | 0.0003 | 0.985 | 0.7208 | 0.8088 |
| 0.0014 | 2.0 | 27666 | 0.0006 | 0.9848 | 0.7207 | 0.8086 |
| 0.0009 | 3.0 | 41499 | 0.0002 | 0.9848 | 0.7207 | 0.8086 |
| 0.0007 | 4.0 | 55332 | 0.0002 | 0.985 | 0.7208 | 0.8088 |
| 0.0006 | 5.0 | 69165 | 0.0001 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 6.0 | 82998 | 0.0002 | 0.9846 | 0.7206 | 0.8086 |
| 0.0009 | 7.0 | 96831 | 0.0001 | 0.9848 | 0.7208 | 0.8087 |
| 0.0 | 8.0 | 110664 | 0.0000 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 9.0 | 124497 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
| 0.0 | 10.0 | 138330 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 4715810c56415ac2b187e6b7c58358ba |
gokuls/distilbert_sa_GLUE_Experiment_qnli | gokuls | distilbert | 23 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,676 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Accuracy: 0.6077
## Model description
More information needed
## Intended uses & limitations
More information needed
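A scoring sketch for QNLI-style (question, sentence) pairs (not part of the original card; the texts are placeholders, and the label names depend on the checkpoint's config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/distilbert_sa_GLUE_Experiment_qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is the capital of France?"  # placeholder
sentence = "Paris has been the capital of France for centuries."  # placeholder
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```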
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6767 | 1.0 | 410 | 0.6560 | 0.6041 |
| 0.644 | 2.0 | 820 | 0.6530 | 0.6077 |
| 0.6141 | 3.0 | 1230 | 0.6655 | 0.6074 |
| 0.5762 | 4.0 | 1640 | 0.7018 | 0.5940 |
| 0.5144 | 5.0 | 2050 | 0.7033 | 0.5934 |
| 0.4324 | 6.0 | 2460 | 0.8714 | 0.5817 |
| 0.3483 | 7.0 | 2870 | 1.0825 | 0.5847 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| 6b2219161e0d3dec145dfe7d007d7876 |
anas-awadalla/bart-large-finetuned-squad-infilling-lr-3e-5-decay-001 | anas-awadalla | bart | 18 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,069 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-squad-infilling-lr-3e-5-decay-001
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| fbfe3c8a8e6174710869b4081d8062d3 |
justin871030/bert-base-uncased-goemotions-group-finetuned | justin871030 | bert | 8 | 6 | transformers | 0 | text-classification | true | false | false | mit | ['en'] | ['go_emotions'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['go-emotion', 'text-classification', 'pytorch'] | false | true | true | 419 | false |
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help with the cases that trained badly (see the loading sketch below).
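A loading sketch (not part of the original card), assuming a multi-label head where a sigmoid is applied per grouped emotion; the input text and the 0.5 threshold are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "justin871030/bert-base-uncased-goemotions-group-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Thanks so much, this made my day! 😄"  # placeholder input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# 0.5 is an assumed threshold; check the GitHub repo for the intended one.
for idx, p in enumerate(probs):
    if p > 0.5:
        print(model.config.id2label[idx], round(float(p), 3))
```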
## Results
Best result: 70% `Macro F1`
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions) | 32266711314acc958e48a5d45624e79a |
chmanoj/xls-r-1B-te | chmanoj | wav2vec2 | 33 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['te'] | ['openslr', 'SLR66'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'openslr_SLR66', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 3,730 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-1B-te
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the OPENSLR_SLR66 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3119
- Wer: 0.2613
### Evaluation metrics
| Metric | Split | Decode with LM | Value |
|:------:|:------:|:--------------:|:---------:|
| WER | Train | No | 5.36 |
| CER | Train | No | 1.11 |
| WER | Test | No | 26.14 |
| CER | Test | No | 4.93 |
| WER | Train | Yes | 5.04 |
| CER | Train | Yes | 1.07 |
| WER | Test | Yes | 20.69 |
| CER | Test | Yes | 3.986 |
## Model description
More information needed
## Intended uses & limitations
More information needed
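A greedy (no-LM) decoding sketch (not part of the original card; the audio path is a placeholder). The LM-boosted numbers in the table above would additionally require the n-gram files and a `pyctcdecode`-based processor, if the repo provides them:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="chmanoj/xls-r-1B-te")
print(asr("telugu_sample.wav")["text"])
```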
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.9038 | 4.8 | 500 | 3.0125 | 1.0 |
| 1.3777 | 9.61 | 1000 | 0.8681 | 0.8753 |
| 1.1436 | 14.42 | 1500 | 0.6256 | 0.7961 |
| 1.0997 | 19.23 | 2000 | 0.5244 | 0.6875 |
| 1.0363 | 24.04 | 2500 | 0.4585 | 0.6276 |
| 0.7996 | 28.84 | 3000 | 0.4072 | 0.5295 |
| 0.825 | 33.65 | 3500 | 0.3590 | 0.5222 |
| 0.8018 | 38.46 | 4000 | 0.3678 | 0.4671 |
| 0.7545 | 43.27 | 4500 | 0.3474 | 0.3962 |
| 0.7375 | 48.08 | 5000 | 0.3224 | 0.3869 |
| 0.6198 | 52.88 | 5500 | 0.3233 | 0.3630 |
| 0.6608 | 57.69 | 6000 | 0.3029 | 0.3308 |
| 0.645 | 62.5 | 6500 | 0.3195 | 0.3722 |
| 0.5249 | 67.31 | 7000 | 0.3004 | 0.3202 |
| 0.4875 | 72.11 | 7500 | 0.2826 | 0.2992 |
| 0.5171 | 76.92 | 8000 | 0.2962 | 0.2976 |
| 0.4974 | 81.73 | 8500 | 0.2990 | 0.2933 |
| 0.4387 | 86.54 | 9000 | 0.2834 | 0.2755 |
| 0.4511 | 91.34 | 9500 | 0.2886 | 0.2787 |
| 0.4112 | 96.15 | 10000 | 0.3093 | 0.2976 |
| 0.4064 | 100.96 | 10500 | 0.3123 | 0.2863 |
| 0.4047 | 105.77 | 11000 | 0.2968 | 0.2719 |
| 0.3519 | 110.57 | 11500 | 0.3106 | 0.2832 |
| 0.3719 | 115.38 | 12000 | 0.3030 | 0.2737 |
| 0.3669 | 120.19 | 12500 | 0.2964 | 0.2714 |
| 0.3386 | 125.0 | 13000 | 0.3101 | 0.2714 |
| 0.3137 | 129.8 | 13500 | 0.3063 | 0.2710 |
| 0.3008 | 134.61 | 14000 | 0.3082 | 0.2617 |
| 0.301 | 139.42 | 14500 | 0.3121 | 0.2628 |
| 0.3291 | 144.23 | 15000 | 0.3105 | 0.2612 |
| 0.3133 | 149.04 | 15500 | 0.3114 | 0.2624 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| 2c375c88965d915893fe8eb80edb1cc1 |
YoungMasterFromSect/Ton_Inf | YoungMasterFromSect | null | 10 | 0 | null | 3 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 783 | false | Be aware: the model is heavily overfitted, so merging is needed. Its best use is probably to merge it with another model for a style change. A later version that should be better will be uploaded.
Sample images:
<style>
img {
display: inline-block;
}
</style>
<img src="https://huggingface.co/YoungMasterFromSect/Ton_Inf/resolve/main/1.png" width="300" height="200">
<img src="https://huggingface.co/YoungMasterFromSect/Ton_Inf/resolve/main/2.png" width="300" height="200">
<img src="https://huggingface.co/YoungMasterFromSect/Ton_Inf/resolve/main/3.png" width="300" height="300">
<img src="https://huggingface.co/YoungMasterFromSect/Ton_Inf/resolve/main/4.png" width="300" height="300">
<img src="https://huggingface.co/YoungMasterFromSect/Ton_Inf/resolve/main/5.png" width="500" height="500">
| cbfe3470dd29dee3bc9738381274dec9 |
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-6 | anas-awadalla | roberta | 17 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-32-finetuned-squad-seed-6
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
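An extractive-QA sketch (not part of the original card; the question and context are placeholders, and with only k=32 training examples the answers may be unreliable):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-6",
)
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result["answer"], result["score"])
```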
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 529409430b30839749a43af03a1b393b |
irateas/conceptart | irateas | null | 6 | 0 | null | 12 | null | false | false | false | openrail | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 4,310 | false |
# Conceptart embedding version 1.0
This model is made for Stable Diffusion 2.0 `checkpoint 768`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/1l48vSo.png">
</div>
### For whom is this model?
This model is targeted toward people who would love to create more artistic pieces in SD:
a cool logo, sticker concepts, or a baseline for an amazing poster. It is also aimed at
concept artists needing inspiration and indie game devs who might need some assets.
This embedding will also be useful for all fans of board games and tabletop RPGs.
### How to use it?
Simply drop the conceptart-x file (where `x` is the number of training steps) into the folder named `embeddings` in your
SD instance's main folder. In your prompt just type something like: "XYZ something in style of `conceptart-x`". This is just an example; the most important part is the `conceptart-x` token.
I recommend trying each of them first, as they can behave a bit differently.
### Issues
Currently the model has some issues: it sometimes produces grayish/dull colors, and the elements of an object are not ideally coherent.
Improvements will come with future versions, which you can expect in the following weeks.
### The strengths
One of the biggest strengths of this model is its pure creativity and, with proper prompting, good output quality out of the box.
Its strongest part is the quality improvement it achieves with img2img.
The usual workflow will often look like this:
1. You prompt-craft and create cool designs,
2. You select ones you like (sometimes smaller objects/elements/designs from the output)
3. You go to img2img to get more variations, or you select a smaller element that you like and you generate a bigger version of it. Then
you improve on the new one up until you are satisfied.
4. You use another embedding to get a surprisingly amazing output! Or you already have a design you like!
5. At the same time you might like to keep the design and upscale it to get a great resolution.
### Examples
***Basketballs*** with Japanese dragons on them:
I used one of the outputs, selected the object I liked with the rectangle tool in the AUTOMATIC1111 img2img UI, and went through two img2img iterations to get the result.
Prompt:
`((basketball ball covered in colourful tattoo of a dragons and underground punk stickers)), illustration in style of conceptart-200, oil painting style
Negative prompt: bad anatomy, unrealistic, abstract, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text))
Steps: 30, Sampler: Euler a, CFG scale: 11.5, Seed: 719571754, Size: 832x832, Model hash: 2c02b20a, Denoising strength: 0.91, Mask blur: 4`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://preview.redd.it/lxsqj6oayd3a1.png?width=1664&format=png&auto=webp&s=875129c03f166aa129f3d37b24f1b919d568d7b3">
</div>
***Anime demons***
Just one extra refinement in img2img.
Prompt:
`colored illustration of dark beast pokemon in style of conceptart-200, [bright colors]
Negative prompt: bad anatomy, unrealistic, abstract, cartoon, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text)), ((multiple characters)) ((desaturated colors))
Steps: 24, Sampler: DDIM, CFG scale: 11.5, Seed: 1001839889, Size: 704x896, Model hash: 2c02b20a`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/KBt2mWB.png">
</div>
***Cave entrance***
A straight comparison between the different embeddings, with the vanilla SD 2.0 768 result at the end.
Prompt:
`colored illustration of dark cave entrance in style of conceptart-200, ((bright background)), ((bright colors))
Negative prompt: bad anatomy, unrealistic, abstract, cartoon, random, amateur photography, blurred, underwater shot, watermark, logo, demon eyes, plastic skin, ((text)), ((multiple characters)) ((desaturated colors))
Steps: 24, Sampler: DDIM, CFG scale: 8, Seed: 1479340448, Size: 768x768, Model hash: 2c02b20a`
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://i.imgur.com/6MtiGUs.jpg">
</div>
Enjoy! Hope you will find it helpful!
| 33e1434e576525fae7e1f2e52cab768f |
stanfordnlp/stanza-de | stanfordnlp | null | 21 | 1,195 | stanza | 2 | token-classification | false | false | false | apache-2.0 | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stanza', 'token-classification'] | false | true | true | 579 | false | # Stanza model for German (de)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
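A minimal usage sketch with the `stanza` package, which downloads the German resources for you (the sentence is a placeholder):

```python
import stanza

stanza.download("de")  # fetches the German models, including this package
nlp = stanza.Pipeline("de", processors="tokenize,pos,lemma,depparse,ner")
doc = nlp("Angela Merkel besuchte Berlin.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)
    print([(ent.text, ent.type) for ent in sentence.ents])
```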
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-10-26 21:18:21.275
| 864e019a57372d2b97409ec520d0e62c |
tftransformers/gpt2 | tftransformers | null | 6 | 5 | null | 0 | null | false | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 5,366 | false |
# GPT-2
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can load this model directly with tf-transformers; the example below runs a forward pass on a sample input:
```python
from tf_transformers.models import GPT2Model
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained("gpt2")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
outputs_tf = model(inputs_tf)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a> | 4b4e655ebb6b0be37cedebfc38e6c514 |
mrm8488/flan-t5-base-finetuned-openai-summarize_from_feedback | mrm8488 | t5 | 12 | 185 | transformers | 9 | text2text-generation | true | false | false | apache-2.0 | null | ['summarize_from_feedback'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,973 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-openai-summarize_from_feedback
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the summarize_from_feedback dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8833
- Rouge1: 29.3494
- Rouge2: 10.9406
- Rougel: 23.9907
- Rougelsum: 25.461
- Gen Len: 18.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
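A generation sketch (not part of the original card; the post text is a placeholder, and the card does not state whether a prefix such as `summarize:` was used during fine-tuning):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mrm8488/flan-t5-base-finetuned-openai-summarize_from_feedback"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

post = "Long Reddit-style post to be summarized goes here ..."  # placeholder
inputs = tokenizer(post, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```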
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.7678 | 1.0 | 5804 | 1.8833 | 29.3494 | 10.9406 | 23.9907 | 25.461 | 18.9265 |
| 1.5839 | 2.0 | 11608 | 1.8992 | 29.6239 | 11.1795 | 24.2927 | 25.7183 | 18.9358 |
| 1.4812 | 3.0 | 17412 | 1.8929 | 29.8899 | 11.2855 | 24.4193 | 25.9219 | 18.9189 |
| 1.4198 | 4.0 | 23216 | 1.8939 | 29.8897 | 11.2606 | 24.3262 | 25.8642 | 18.9309 |
| 1.3612 | 5.0 | 29020 | 1.9105 | 29.8469 | 11.2112 | 24.2483 | 25.7884 | 18.9396 |
| 1.3279 | 6.0 | 34824 | 1.9170 | 30.038 | 11.3426 | 24.4385 | 25.9675 | 18.9328 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| cad131c7d9f51f557bd5d74b4df31406 |