repo_id (string, 4-110 chars) | author (string, 2-27 chars) | model_type (string, 2-29 chars) | files_per_repo (int64, 2-15.4k) | downloads_30d (int64, 0-19.9M) | library (string, 2-37 chars) | likes (int64, 0-4.34k) | pipeline (string, 5-30 chars) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, 2-30 chars) | languages (string, 4-1.63k chars) | datasets (string, 2-2.58k chars) | co2 (string, 29 classes) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-15) | prs_closed (int64, 0-28) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, 2-513 chars) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64, 401-598k) | is_nc (bool, 1 class) | readme (string, 0-598k chars) | hash (string, 32 chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gabrieleai/gamindocar-3000-700 | gabrieleai | null | 15 | 31 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 624 | false |
### Gamindocar-3000-700 Dreambooth model trained by gabrieleai with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
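A minimal `diffusers` loading sketch for local inference (this assumes the repo ships standard Stable Diffusion weights; the concept token `gamindocar` in the prompt is a guess, since the card does not document the instance prompt):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint directly from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "gabrieleai/gamindocar-3000-700", torch_dtype=torch.float16
).to("cuda")

# "gamindocar" is a guessed concept token; adjust the prompt to the trained concept
image = pipe("a photo of gamindocar car on a mountain road").images[0]
image.save("gamindocar_sample.png")
```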
Sample pictures of this concept:
| b1249a1d409e9f73bed623272b4379d7 |
francisco-perez-sorrosal/distilbert-base-uncased-finetuned-with-spanish-tweets-clf | francisco-perez-sorrosal | distilbert | 10 | 171 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['dataset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,660 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-with-spanish-tweets-clf
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the `dataset` dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0580
- Accuracy: 0.5701
- F1: 0.5652
- Precision: 0.5666
- Recall: 0.5642
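For a quick check, the fine-tuned classifier can be loaded with the `transformers` pipeline (a sketch; the returned label names depend on the model's `id2label` config, and the input tweet is an arbitrary example):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="francisco-perez-sorrosal/distilbert-base-uncased-finetuned-with-spanish-tweets-clf",
)

# Arbitrary Spanish tweet-like input; label names come from the model's config
print(clf("Me encanta este producto, es una maravilla."))
```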
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0643 | 1.0 | 543 | 1.0457 | 0.4423 | 0.2761 | 0.5104 | 0.3712 |
| 0.9754 | 2.0 | 1086 | 0.9700 | 0.5155 | 0.4574 | 0.5190 | 0.4712 |
| 0.8145 | 3.0 | 1629 | 0.9691 | 0.5556 | 0.5544 | 0.5616 | 0.5506 |
| 0.6318 | 4.0 | 2172 | 1.0580 | 0.5701 | 0.5652 | 0.5666 | 0.5642 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 8e576a6e120e41c109192088dbb45963 |
jy46604790/Fake-News-Bert-Detect | jy46604790 | roberta | 8 | 198 | transformers | 2 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,108 | false |
# Fake News Recognition
## Overview
This model is trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It returns a result for any news text of up to 500 words (the excess is truncated automatically).
LABEL_0: Fake news
LABEL_1: Real news
## Quick Tutorial
### Download The Model
```python
from transformers import pipeline
MODEL = "jy46604790/Fake-News-Bert-Detect"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```python
text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail s second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison s walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention."
```
### Result
```python
result = clf(text)
result
```
output: [{'label': 'LABEL_1', 'score': 0.9994995594024658}]
| 8728088caf478817ffda650aae7b9058 |
google/t5-efficient-tiny-nl24 | google | t5 | 12 | 7 | transformers | 0 | text2text-generation | true | true | true | apache-2.0 | ['en'] | ['c4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-narrow'] | false | true | true | 6,251 | false |
# T5-Efficient-TINY-NL24 (Deep-Narrow version)
T5-Efficient-TINY-NL24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-tiny-nl24** - is of model type **Tiny** with the following variations:
- **nl** is **24**
It has **52.35** million parameters and thus requires *ca.* **209.41 MB** of memory in full precision (*fp32*)
or **104.71 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint lists no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
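Before following any of the examples above, the checkpoint can be loaded like any other T5 model; a minimal sketch (not an official example):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny-nl24")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-tiny-nl24")

# Pretrained-only checkpoint: fine-tune on a downstream task before using it for inference
print(f"{model.num_parameters() / 1e6:.2f}M parameters")  # ~52.35M, matching the card
```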
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 62cf5684fb6cdd59e1e9e27854eeb37c |
xander71988/t5-small-finetuned-facet-contract-type-test | xander71988 | t5 | 8 | 2 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,360 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xander71988/t5-small-finetuned-facet-contract-type-test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3136
- Validation Loss: 0.3881
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3496, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
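The optimizer entry above corresponds roughly to the following Keras setup (a sketch reconstructed from the logged configuration, not the original training script):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Polynomial decay schedule as logged in the optimizer config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5.6e-05,
    decay_steps=3496,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```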
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3136 | 0.3881 | 0 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.5.0
- Datasets 2.3.2
- Tokenizers 0.13.2
| e3ca5ed20b98a162a620332a9892f5ae |
richielo/small-e-czech-finetuned-ner-wikiann | richielo | electra | 14 | 99 | transformers | 1 | token-classification | true | false | false | cc-by-4.0 | null | ['wikiann'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,131 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-e-czech-finetuned-ner-wikiann
This model is a fine-tuned version of [Seznam/small-e-czech](https://huggingface.co/Seznam/small-e-czech) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2547
- Precision: 0.8713
- Recall: 0.8970
- F1: 0.8840
- Accuracy: 0.9557
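A quick way to try the fine-tuned tagger is the token-classification pipeline (a sketch; the aggregation strategy and the Czech example sentence are assumptions, not part of this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="richielo/small-e-czech-finetuned-ner-wikiann",
    aggregation_strategy="simple",  # assumption: merge subword pieces into entity spans
)

print(ner("Karel Čapek se narodil v Malých Svatoňovicích."))
```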
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2924 | 1.0 | 2500 | 0.2449 | 0.7686 | 0.8088 | 0.7882 | 0.9320 |
| 0.2042 | 2.0 | 5000 | 0.2137 | 0.8050 | 0.8398 | 0.8220 | 0.9400 |
| 0.1699 | 3.0 | 7500 | 0.1912 | 0.8236 | 0.8593 | 0.8411 | 0.9466 |
| 0.1419 | 4.0 | 10000 | 0.1931 | 0.8349 | 0.8671 | 0.8507 | 0.9488 |
| 0.1316 | 5.0 | 12500 | 0.1892 | 0.8470 | 0.8776 | 0.8620 | 0.9519 |
| 0.1042 | 6.0 | 15000 | 0.2058 | 0.8433 | 0.8811 | 0.8618 | 0.9508 |
| 0.0884 | 7.0 | 17500 | 0.2020 | 0.8602 | 0.8849 | 0.8724 | 0.9531 |
| 0.0902 | 8.0 | 20000 | 0.2118 | 0.8551 | 0.8837 | 0.8692 | 0.9528 |
| 0.0669 | 9.0 | 22500 | 0.2171 | 0.8634 | 0.8906 | 0.8768 | 0.9550 |
| 0.0529 | 10.0 | 25000 | 0.2228 | 0.8638 | 0.8912 | 0.8773 | 0.9545 |
| 0.0613 | 11.0 | 27500 | 0.2293 | 0.8626 | 0.8898 | 0.8760 | 0.9544 |
| 0.0549 | 12.0 | 30000 | 0.2276 | 0.8694 | 0.8958 | 0.8824 | 0.9554 |
| 0.0516 | 13.0 | 32500 | 0.2384 | 0.8717 | 0.8940 | 0.8827 | 0.9552 |
| 0.0412 | 14.0 | 35000 | 0.2443 | 0.8701 | 0.8931 | 0.8815 | 0.9554 |
| 0.0345 | 15.0 | 37500 | 0.2464 | 0.8723 | 0.8958 | 0.8839 | 0.9557 |
| 0.0412 | 16.0 | 40000 | 0.2477 | 0.8705 | 0.8948 | 0.8825 | 0.9552 |
| 0.0363 | 17.0 | 42500 | 0.2525 | 0.8742 | 0.8973 | 0.8856 | 0.9559 |
| 0.0341 | 18.0 | 45000 | 0.2529 | 0.8727 | 0.8962 | 0.8843 | 0.9561 |
| 0.0194 | 19.0 | 47500 | 0.2533 | 0.8699 | 0.8966 | 0.8830 | 0.9557 |
| 0.0247 | 20.0 | 50000 | 0.2547 | 0.8713 | 0.8970 | 0.8840 | 0.9557 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| d971a085071f1f4f6bf8e95f244d43ae |
marcus2000/ru_t5_model_forlegaltext_rouge | marcus2000 | mt5 | 18 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,337 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual_t5_model_for_law_simplification
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.2857
- Rouge2: 0.0
- Rougel: 0.2857
- Rougelsum: 0.2857
- Gen Len: 7.9033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 157 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| No log | 2.0 | 314 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| No log | 3.0 | 471 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 4.0 | 628 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 5.0 | 785 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 6.0 | 942 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 7.0 | 1099 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 8.0 | 1256 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 9.0 | 1413 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
| 0.0 | 10.0 | 1570 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| ac428f33e250ce23486d9a6d1a6661a1 |
PlanTL-GOB-ES/roberta-large-bne-te | PlanTL-GOB-ES | roberta | 9 | 214 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['es'] | ['xnli'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['national library of spain', 'spanish', 'bne', 'xnli', 'textual entailment'] | true | true | true | 7,219 | false |
# Spanish RoBERTa-large trained on BNE finetuned for the Spanish Cross-lingual Natural Language Inference (XNLI) dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-large-bne-te** is a Textual Entailment (TE) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
**roberta-large-bne-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="PlanTL-GOB-ES/roberta-large-bne-te")
example = "Mi cumpleaรฑos es el 27 de mayo. Cumplirรฉ aรฑos a finales de mayo."
te_results = nlp(example)
pprint(te_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
We used the TE dataset in Spanish called [XNLI dataset](https://huggingface.co/datasets/xnli) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing accuracy.
### Evaluation results
We evaluated the *roberta-large-bne-te* on the XNLI test set against standard multilingual and monolingual baselines:
| Model | XNLI (Accuracy) |
| ------------|:----|
| roberta-large-bne | **82.63** |
| roberta-base-bne | 80.16 |
| BETO | 81.30 |
| mBERT | 78.76 |
| BERTIN | 78.90 |
| ELECTRA | 78.78 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
| 6fd388ec0ace553cc491f3f5044f3a1b |
vasudevgupta/bigbird-roberta-natural-questions | vasudevgupta | big_bird | 11 | 7,949 | transformers | 7 | question-answering | true | false | false | apache-2.0 | ['en'] | ['natural_questions'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | [] | false | true | true | 842 | false |
This checkpoint is obtained after training `BigBirdForQuestionAnswering` (with extra pooler head) on [`natural_questions`](https://huggingface.co/datasets/natural_questions) dataset for ~ 2 weeks on 2 K80 GPUs. Script for training can be found here: https://github.com/vasudevgupta7/bigbird
| Exact Match | 47.44 |
|-------------|-------|
**Use this model just like any other model from 🤗 Transformers**
```python
from transformers import BigBirdForQuestionAnswering, BigBirdTokenizer
model_id = "vasudevgupta/bigbird-roberta-natural-questions"
model = BigBirdForQuestionAnswering.from_pretrained(model_id)
tokenizer = BigBirdTokenizer.from_pretrained(model_id)
```
In case you are interested in predicting category (null, long, short, yes, no) as well, use `BigBirdForNaturalQuestions` (instead of `BigBirdForQuestionAnswering`) from my training script.
| a0dedcd155df00db41b4985036b226c7 |
gchhablani/bert-base-cased-finetuned-wnli | gchhablani | bert | 71 | 29 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'fnet-bert-base-comparison'] | true | true | true | 2,334 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.4648
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7299 | 1.0 | 40 | 0.6923 | 0.5634 |
| 0.6982 | 2.0 | 80 | 0.7027 | 0.3803 |
| 0.6972 | 3.0 | 120 | 0.7005 | 0.4507 |
| 0.6992 | 4.0 | 160 | 0.6977 | 0.5352 |
| 0.699 | 5.0 | 200 | 0.6996 | 0.4648 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| 7ef85cce6cf25c73bfa97361fdb36306 |
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 495 | false |
# exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
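The card stops short of a usage snippet; a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder and must point to 16kHz audio) would look like:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s632",
)

# "audio_16khz.wav" is a placeholder path; input must be sampled at 16kHz
print(asr("audio_16khz.wav")["text"])
```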
| 9734ba8789b5850273a8cd6165e889bd |
muhtasham/tiny-mlm-imdb-target-tweet | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,898 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5550
- Accuracy: 0.6925
- F1: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.159 | 4.9 | 500 | 0.9977 | 0.6364 | 0.6013 |
| 0.7514 | 9.8 | 1000 | 0.8549 | 0.7112 | 0.7026 |
| 0.5011 | 14.71 | 1500 | 0.8516 | 0.7032 | 0.6962 |
| 0.34 | 19.61 | 2000 | 0.9019 | 0.7059 | 0.7030 |
| 0.2258 | 24.51 | 2500 | 0.9722 | 0.7166 | 0.7164 |
| 0.1607 | 29.41 | 3000 | 1.0724 | 0.6979 | 0.6999 |
| 0.1127 | 34.31 | 3500 | 1.1435 | 0.7193 | 0.7169 |
| 0.0791 | 39.22 | 4000 | 1.2807 | 0.7059 | 0.7069 |
| 0.0568 | 44.12 | 4500 | 1.3849 | 0.7139 | 0.7159 |
| 0.0478 | 49.02 | 5000 | 1.5550 | 0.6925 | 0.7004 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
| 80ebfc551cc686044bdc108be4e21182 |
gokuls/mobilebert_add_GLUE_Experiment_wnli | gokuls | mobilebert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,580 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_wnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6896
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
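The hyperparameters above map onto the `TrainingArguments` API from `transformers` roughly as follows (a sketch; the multi-GPU distributed setup is not reproduced here, and the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mobilebert_add_GLUE_Experiment_wnli",  # placeholder output directory
    learning_rate=5e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```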
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6956 | 1.0 | 5 | 0.6896 | 0.5634 |
| 0.6945 | 2.0 | 10 | 0.6950 | 0.4366 |
| 0.6938 | 3.0 | 15 | 0.6950 | 0.4366 |
| 0.693 | 4.0 | 20 | 0.6914 | 0.5634 |
| 0.6931 | 5.0 | 25 | 0.6897 | 0.5634 |
| 0.6932 | 6.0 | 30 | 0.6900 | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| 20576a22a0d98d02474b055b1857c344 |
SongRb/distilbert-base-uncased-finetuned-cola | SongRb | distilbert | 19 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | false | true | true | 1,570 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8549
- Matthews Correlation: 0.5332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5213 | 1.0 | 535 | 0.5163 | 0.4183 |
| 0.3479 | 2.0 | 1070 | 0.5351 | 0.5182 |
| 0.231 | 3.0 | 1605 | 0.6271 | 0.5291 |
| 0.166 | 4.0 | 2140 | 0.7531 | 0.5279 |
| 0.1313 | 5.0 | 2675 | 0.8549 | 0.5332 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
| 2957bb5ad02601e489d458c11d261be2 |
benjamin/gpt2-wechsel-german | benjamin | gpt2 | 21 | 65 | transformers | 2 | text-generation | true | false | false | mit | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,644 | false |
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
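A minimal generation sketch with the `transformers` pipeline (the German prompt and sampling settings are arbitrary examples):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-german")

# Arbitrary German prompt; sampling settings are illustrative only
print(generator("Die Zukunft der künstlichen Intelligenz", max_length=40, do_sample=True)[0]["generated_text"])
```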
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
| e27eafd666e624995df680d14d933b79 |
pig4431/CR_BERT_5E | pig4431 | bert | 10 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,046 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CR_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5094
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.694 | 0.33 | 50 | 0.5894 | 0.6733 |
| 0.5335 | 0.66 | 100 | 0.4150 | 0.84 |
| 0.3446 | 0.99 | 150 | 0.3052 | 0.9 |
| 0.241 | 1.32 | 200 | 0.3409 | 0.8733 |
| 0.2536 | 1.66 | 250 | 0.3101 | 0.88 |
| 0.2318 | 1.99 | 300 | 0.3015 | 0.8867 |
| 0.1527 | 2.32 | 350 | 0.3806 | 0.8733 |
| 0.1026 | 2.65 | 400 | 0.3788 | 0.8733 |
| 0.1675 | 2.98 | 450 | 0.3956 | 0.8933 |
| 0.0699 | 3.31 | 500 | 0.4532 | 0.8867 |
| 0.0848 | 3.64 | 550 | 0.4636 | 0.88 |
| 0.0991 | 3.97 | 600 | 0.4951 | 0.88 |
| 0.0578 | 4.3 | 650 | 0.5073 | 0.88 |
| 0.0636 | 4.64 | 700 | 0.5090 | 0.8733 |
| 0.0531 | 4.97 | 750 | 0.5094 | 0.8733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
| 0dbe89963aac5ddaa7fa89ddf5e30788 |
Geotrend/bert-base-en-tr-cased | Geotrend | bert | 8 | 2 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,292 | false |
# bert-base-en-tr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-tr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-tr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
| 475f8ef5299475b02f764ec47a86b484 |
Davlan/xlm-roberta-base-finetuned-arabic | Davlan | xlm-roberta | 10 | 6 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,015 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar_xlmr-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6612
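As a fill-mask model, it can be queried through the standard pipeline (a sketch; the Arabic prompt is an arbitrary example, and `<mask>` is the XLM-RoBERTa mask token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Davlan/xlm-roberta-base-finetuned-arabic")

# "The capital of France is <mask>." in Arabic; <mask> is XLM-RoBERTa's mask token
print(fill("عاصمة فرنسا هي <mask>."))
```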
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.7.1+cu110
- Datasets 1.16.1
- Tokenizers 0.12.1
| de4e89b3d578b62a4954cd80bc1be5ee |
ttj/flex-diffusion-2-1 | ttj | null | 28 | 42 | diffusers | 15 | text-to-image | false | false | false | openrail++ | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 11,348 | false |
# Model Card for flex-diffusion-2-1
<!-- Provide a quick summary of what the model is/does. [Optional] -->
stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned with different aspect ratios.
## TLDR:
### There are 2 models in this repo:
- One based on stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for 6k steps.
- One based on stable-diffusion-2-base (stabilityai/stable-diffusion-2-base) finetuned for 6k steps, on the same dataset.
For usage, see - [How to Get Started with the Model](#how-to-get-started-with-the-model)
### It aims to solve the following issues:
1. Generated images looks like they are cropped from a larger image.
2. Generating non-square images creates weird results, due to the model being trained on square images.
Examples:
| resolution | model | stable diffusion | flex diffusion |
|:---------------:|:-------:|:----------------------------:|:-----------------------------:|
| 576x1024 (9:16) | v2-1 | ![img](imgs/21-576-1024.png) | ![img](imgs/21f-576-1024.png) |
| 576x1024 (9:16) | v2-base | ![img](imgs/2b-576-1024.png) | ![img](imgs/2bf-576-1024.png) |
| 1024x576 (16:9) | v2-1 | ![img](imgs/21-1024-576.png) | ![img](imgs/21f-1024-576.png) |
| 1024x576 (16:9) | v2-base | ![img](imgs/2b-1024-576.png) | ![img](imgs/2bf-1024-576.png) |
### Limitations:
1. It's trained on a small dataset, so its improvements may be limited.
2. For each aspect ratio, it's trained on only a fixed resolution. So it may not be able to generate images of different resolutions.
For 1:1 aspect ratio, it's fine-tuned at 512x512, although flex-diffusion-2-1 was last finetuned at 768x768.
### Potential improvements:
1. Train on a larger dataset.
2. Train on different resolutions even for the same aspect ratio.
3. Train on specific aspect ratios, instead of a range of aspect ratios.
# Table of Contents
- [Model Card for flex-diffusion-2-1](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Table of Contents](#table-of-contents-1)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
stable-diffusion-2-1 (stabilityai/stable-diffusion-2-1) finetuned for dynamic aspect ratios.
finetuned resolutions:
| | width | height | aspect ratio |
|---:|--------:|---------:|:---------------|
| 0 | 512 | 1024 | 1:2 |
| 1 | 576 | 1024 | 9:16 |
| 2 | 576 | 960 | 3:5 |
| 3 | 640 | 1024 | 5:8 |
| 4 | 512 | 768 | 2:3 |
| 5 | 640 | 896 | 5:7 |
| 6 | 576 | 768 | 3:4 |
| 7 | 512 | 640 | 4:5 |
| 8 | 640 | 768 | 5:6 |
| 9 | 640 | 704 | 10:11 |
| 10 | 512 | 512 | 1:1 |
| 11 | 704 | 640 | 11:10 |
| 12 | 768 | 640 | 6:5 |
| 13 | 640 | 512 | 5:4 |
| 14 | 768 | 576 | 4:3 |
| 15 | 896 | 640 | 7:5 |
| 16 | 768 | 512 | 3:2 |
| 17 | 1024 | 640 | 8:5 |
| 18 | 960 | 576 | 5:3 |
| 19 | 1024 | 576 | 16:9 |
| 20 | 1024 | 512 | 2:1 |
- **Developed by:** Jonathan Chang
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s)**: English
- **License:** creativeml-openrail-m
- **Parent Model:** https://huggingface.co/stabilityai/stable-diffusion-2-1
- **Resources for more information:** More information needed
# Uses
- see https://huggingface.co/stabilityai/stable-diffusion-2-1
# Training Details
## Training Data
- LAION aesthetic dataset, subset of it with 6+ rating
- https://laion.ai/blog/laion-aesthetics/
- https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus
- I only used a small portion of that, see [Preprocessing](#preprocessing)
- most common aspect ratios in the dataset (before preprocessing)
| | aspect_ratio | counts |
|---:|:---------------|---------:|
| 0 | 1:1 | 154727 |
| 1 | 3:2 | 119615 |
| 2 | 2:3 | 61197 |
| 3 | 4:3 | 52276 |
| 4 | 16:9 | 38862 |
| 5 | 400:267 | 21893 |
| 6 | 3:4 | 16893 |
| 7 | 8:5 | 16258 |
| 8 | 4:5 | 15684 |
| 9 | 6:5 | 12228 |
| 10 | 1000:667 | 12097 |
| 11 | 2:1 | 11006 |
| 12 | 800:533 | 10259 |
| 13 | 5:4 | 9753 |
| 14 | 500:333 | 9700 |
| 15 | 250:167 | 9114 |
| 16 | 5:3 | 8460 |
| 17 | 200:133 | 7832 |
| 18 | 1024:683 | 7176 |
| 19 | 11:10 | 6470 |
- predefined aspect ratios
| | width | height | aspect ratio |
|---:|--------:|---------:|:---------------|
| 0 | 512 | 1024 | 1:2 |
| 1 | 576 | 1024 | 9:16 |
| 2 | 576 | 960 | 3:5 |
| 3 | 640 | 1024 | 5:8 |
| 4 | 512 | 768 | 2:3 |
| 5 | 640 | 896 | 5:7 |
| 6 | 576 | 768 | 3:4 |
| 7 | 512 | 640 | 4:5 |
| 8 | 640 | 768 | 5:6 |
| 9 | 640 | 704 | 10:11 |
| 10 | 512 | 512 | 1:1 |
| 11 | 704 | 640 | 11:10 |
| 12 | 768 | 640 | 6:5 |
| 13 | 640 | 512 | 5:4 |
| 14 | 768 | 576 | 4:3 |
| 15 | 896 | 640 | 7:5 |
| 16 | 768 | 512 | 3:2 |
| 17 | 1024 | 640 | 8:5 |
| 18 | 960 | 576 | 5:3 |
| 19 | 1024 | 576 | 16:9 |
| 20 | 1024 | 512 | 2:1 |
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
1. download files with url & caption from https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus
- I only used the first file `train-00000-of-00007-29aec9150af50f9f.parquet`
2. use img2dataset to convert to webdataset
- https://github.com/rom1504/img2dataset
- I put train-00000-of-00007-29aec9150af50f9f.parquet in a folder called `first-file`
- the output folder is `/mnt/aesthetics6plus`, change this to your own folder
```bash
echo INPUT_FOLDER=first-file
echo OUTPUT_FOLDER=/mnt/aesthetics6plus
img2dataset --url_list $INPUT_FOLDER --input_format "parquet"\
--url_col "URL" --caption_col "TEXT" --output_format webdataset\
--output_folder $OUTPUT_FOLDER --processes_count 3 --thread_count 6 --image_size 1024 --resize_only_if_bigger --resize_mode=keep_ratio_largest \
--save_additional_columns '["WIDTH","HEIGHT","punsafe","similarity"]' --enable_wandb True
```
3. The data-loading code will do preprocessing on the fly, so no need to do anything else. But it's not optimized for speed, the GPU utilization fluctuates between 80% and 100%. And it's not written for multi-GPU training, so use it with caution. The code will do the following:
- use webdataset to load the data
- calculate the aspect ratio of each image
- find the closest aspect ratio & it's associated resolution from the predefined resolutions: `argmin(abs(aspect_ratio - predefined_aspect_ratios))`. E.g. if the aspect ratio is 1:3, the closest resolution is 1:2. and it's associated resolution is 512x1024.
- keeping the aspect ratio, resize the image such that it's larger or equal to the associated resolution on each side. E.g. resize to 512x(512*3) = 512x1536
- random crop the image to the associated resolution. E.g. crop to 512x1024
- if more than 10% of the image is lost in the cropping, discard this example.
- batch examples by aspect ratio, so all examples in a batch have the same aspect ratio
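A minimal sketch of the bucket-selection step described above (`BUCKETS` stands in for the predefined resolution table; the resize, crop, and 10% crop-loss filter are omitted):
```python
def closest_bucket(width, height, buckets):
    """Return the predefined (w, h) whose aspect ratio is closest to the image's."""
    aspect = width / height
    return min(buckets, key=lambda wh: abs(aspect - wh[0] / wh[1]))

# A few of the predefined resolutions listed above
BUCKETS = [(512, 1024), (512, 768), (512, 512), (768, 512), (1024, 512)]
print(closest_bucket(600, 1800, BUCKETS))  # -> (512, 1024), the 1:2 bucket
```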
### Speeds, Sizes, Times
- Dataset size: 100k image-caption pairs, before filtering.
- I didn't wait for the whole dataset to be downloaded, I copied the first 10 tar files and their index files to a new folder called `aesthetics6plus-small`, with 100k image-caption pairs in total. The full dataset is a lot bigger.
- Hardware: 1 RTX3090 GPUs
- Optimizer: 8bit Adam
- Batch size: 32
- actual batch size: 2
- gradient_accumulation_steps: 16
- effective batch size: 32
- Learning rate: warmup to 2e-6 for 500 steps and then kept constant
- Learning rate: 2e-6
- Training steps: 6k
- Epoch size (approximate): 32 * 6k / 100k = 1.92 (not accounting for the filtering)
- Each example is seen 1.92 times on average.
- Training time: approximately 1 day
## Results
More information needed
# Model Card Authors
Jonathan Chang
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler, UNet2DConditionModel
def use_DPM_solver(pipe):
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
return pipe
pipe = StableDiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1",
unet = UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-1/unet", torch_dtype=torch.float16),
torch_dtype=torch.float16,
)
# for v2-base, use the following line instead
#pipe = StableDiffusionPipeline.from_pretrained(
# "stabilityai/stable-diffusion-2-base",
# unet = UNet2DConditionModel.from_pretrained("ttj/flex-diffusion-2-1", subfolder="2-base/unet", torch_dtype=torch.float16),
# torch_dtype=torch.float16)
pipe = use_DPM_solver(pipe).to("cuda")
prompt = "a professional photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
| 1b92f954f5be96fdc54cb3322ac7d8f4 |
jhaochenz/finetuned_distilgpt2_sst2_negation0.01_pretrainedTrue_epochs1 | jhaochenz | gpt2 | 14 | 0 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,163 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_distilgpt2_sst2_negation0.01_pretrainedTrue_epochs1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7102 | 1.0 | 1323 | 3.2761 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| 5aa6b3f491cd8fb68ccdec53b5d3dee0 |
Helsinki-NLP/opus-mt-kab-en | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 778 | false |
### opus-mt-kab-en
* source languages: kab
* target languages: en
* OPUS readme: [kab-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kab-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.kab.en | 27.5 | 0.408 |
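For reference, a minimal usage sketch with the Marian classes in `transformers` (the input sentence is a placeholder, not part of the original card):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-kab-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a batch of Kabyle sentences into English
batch = tokenizer(["<replace with a Kabyle sentence>"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```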
| e3d7cbad96a54b1921b3651e43340788 |
AykeeSalazar/violation-classification-bantai-vit-withES | AykeeSalazar | vit | 9 | 9 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['image_folder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,274 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# violation-classification-bantai-vit-withES
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2234
- eval_accuracy: 0.9592
- eval_runtime: 64.9173
- eval_samples_per_second: 85.37
- eval_steps_per_second: 2.68
- epoch: 227.72
- step: 23000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 01256acfdf2f94c35f9a0e736daf6d9e |
Helsinki-NLP/opus-mt-zne-fi | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-zne-fi
* source languages: zne
* target languages: fi
* OPUS readme: [zne-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fi | 22.8 | 0.432 |
| 4d1dacd86a268f29cf5c6016cd99d598 |
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg3 | dminiotas05 | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,355 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft750_reg3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6143
- Mse: 0.6143
- Mae: 0.6022
- R2: 0.4218
- Accuracy: 0.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.5241 | 1.0 | 188 | 0.6143 | 0.6143 | 0.6022 | 0.4218 | 0.52 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| f12074dbb299e37eb89055e19afd27a8 |
muhtasham/tiny-mlm-glue-rte-target-glue-stsb | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,027 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte-target-glue-stsb
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9754
- Pearson: 0.8093
- Spearmanr: 0.8107
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 3.2461 | 2.78 | 500 | 1.1464 | 0.7083 | 0.7348 |
| 1.0093 | 5.56 | 1000 | 1.1455 | 0.7664 | 0.7934 |
| 0.7582 | 8.33 | 1500 | 1.0140 | 0.7980 | 0.8136 |
| 0.6329 | 11.11 | 2000 | 0.8708 | 0.8136 | 0.8184 |
| 0.5285 | 13.89 | 2500 | 0.8894 | 0.8139 | 0.8159 |
| 0.4747 | 16.67 | 3000 | 0.9908 | 0.8116 | 0.8165 |
| 0.4154 | 19.44 | 3500 | 0.9260 | 0.8137 | 0.8145 |
| 0.3792 | 22.22 | 4000 | 0.9264 | 0.8161 | 0.8156 |
| 0.3445 | 25.0 | 4500 | 0.9664 | 0.8155 | 0.8164 |
| 0.3246 | 27.78 | 5000 | 0.9735 | 0.8110 | 0.8121 |
| 0.3033 | 30.56 | 5500 | 0.9754 | 0.8093 | 0.8107 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| eadd3ff0113ff50849f82f6d70bcca75 |
sahillihas/OntoMedQA | sahillihas | bert | 12 | 4 | transformers | 0 | multiple-choice | true | false | false | apache-2.0 | null | ['medmcqa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,311 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OntoMedQA
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the medmcqa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2874
- Accuracy: 0.4118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 187 | 1.2418 | 0.2941 |
| No log | 2.0 | 374 | 1.1449 | 0.4706 |
| 0.8219 | 3.0 | 561 | 1.2874 | 0.4118 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| cdc945e58c1264a88d7bcae3f571389a |
cansen88/turkishReviews_5_topic | cansen88 | gpt2 | 9 | 2 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,574 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews_5_topic
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.8939
- Validation Loss: 6.8949
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 756, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.0049 | 6.8949 | 0 |
| 6.8943 | 6.8949 | 1 |
| 6.8939 | 6.8949 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| 9ca7dc140af910939259097709520904 |
jonatasgrosman/exp_w2v2t_fa_wavlm_s527 | jonatasgrosman | wavlm | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fa'] | false | true | true | 439 | false | # exp_w2v2t_fa_wavlm_s527
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
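A minimal transcription sketch with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_wavlm_s527")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# Transcribe Persian speech; the input audio should be sampled at 16kHz
transcriptions = model.transcribe(audio_paths)
```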
| 6740a953c8824985296d13f071fc3513 |
Geotrend/bert-base-en-nl-cased | Geotrend | bert | 8 | 2 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['multilingual'] | ['wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,292 | false |
# bert-base-en-nl-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-nl-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-nl-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. | f5b472db6d12b18037c97caed276d2c7 |
coreml/coreml-Inkpunk-Diffusion | coreml | null | 4 | 0 | null | 2 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coreml', 'stable-diffusion', 'text-to-image'] | false | true | true | 1,328 | false |
# Core ML Converted Model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
`split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
`original` version is only compatible with CPU & GPU option.
# Inkpunk Diffusion
Fine-tuned Stable Diffusion model trained with DreamBooth. Vaguely inspired by Gorillaz, FLCL, and Yoji Shinkawa. Use **_nvinkpunk_** in your prompts.
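The Core ML weights in this repo are meant for apps such as Mochi Diffusion, but for reference, a prompt using the **_nvinkpunk_** token looks like the sketch below; it assumes the original diffusers-format checkpoint [Envvi/Inkpunk-Diffusion](https://huggingface.co/Envvi/Inkpunk-Diffusion), not the Core ML files here:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: loads the original diffusers checkpoint, not this Core ML conversion
pipe = StableDiffusionPipeline.from_pretrained("Envvi/Inkpunk-Diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "nvinkpunk portrait of a samurai, dramatic lighting"
image = pipe(prompt).images[0]
image.save("inkpunk_samurai.png")
```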
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Inkpunk-Diffusion:
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/Inkpunk-Diffusion)
# Sample images
![output Samples v2](https://huggingface.co/Envvi/Inkpunk-Diffusion/resolve/main/inkpunk-v2-samples-1.png)
![output Samples v2](https://huggingface.co/Envvi/Inkpunk-Diffusion/resolve/main/inkpunk-v2-samples-2.png) | 3f83914dd0beaba0bffba5de8b05204a |
eunyounglee/mBART_translator_kobart_2 | eunyounglee | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,935 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART_translator_kobart_2
This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0971
- Bleu: 34.7406
- Gen Len: 19.863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7514 | 1.0 | 1912 | 0.3009 | 33.1304 | 19.8465 |
| 0.4925 | 2.0 | 3824 | 0.2213 | 33.7817 | 19.8541 |
| 0.3893 | 3.0 | 5736 | 0.1789 | 34.1249 | 19.8577 |
| 0.3328 | 4.0 | 7648 | 0.1555 | 34.3074 | 19.8598 |
| 0.2896 | 5.0 | 9560 | 0.1375 | 34.4127 | 19.8585 |
| 0.2614 | 6.0 | 11472 | 0.1240 | 34.5061 | 19.8604 |
| 0.2292 | 7.0 | 13384 | 0.1116 | 34.6476 | 19.8632 |
| 0.2098 | 8.0 | 15296 | 0.1050 | 34.6956 | 19.863 |
| 0.1967 | 9.0 | 17208 | 0.0995 | 34.7304 | 19.863 |
| 0.1812 | 10.0 | 19120 | 0.0971 | 34.7406 | 19.863 |
### Framework versions
- Transformers 4.23.0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
| 7c2646968f58eb500d5be1b66920d93b |
shpotes/xls-r-et | shpotes | wav2vec2 | 96 | 11 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['et'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard'] | true | true | true | 1,795 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4835
- Wer: 0.3475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3825 | 12.5 | 500 | 0.4022 | 0.5059 |
| 0.1592 | 25.0 | 1000 | 0.4585 | 0.4456 |
| 0.1215 | 37.5 | 1500 | 0.4550 | 0.4164 |
| 0.0972 | 50.0 | 2000 | 0.4725 | 0.4088 |
| 0.0731 | 62.5 | 2500 | 0.4568 | 0.3824 |
| 0.0527 | 75.0 | 3000 | 0.4712 | 0.3653 |
| 0.0428 | 87.5 | 3500 | 0.4813 | 0.3520 |
| 0.0383 | 100.0 | 4000 | 0.4835 | 0.3475 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
| d96aeb1250f62fe5055ca34fdf49605d |
Geotrend/bert-base-ja-cased | Geotrend | bert | 8 | 8 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['ja'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,283 | false |
# bert-base-ja-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ja-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request. | e19827a5a70fabf53a113f8f4b07aa52 |
hieuit7/wav2vec2-common_voice-vi-demo | hieuit7 | wav2vec2 | 26 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['vi'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer'] | true | true | true | 1,361 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-vi-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - VI dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4768
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 7.67 | 100 | 5.9657 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.13.1
| 94a33dc121fd825cdaf822d8331e43e3 |
pere/whisper-medium-NST-uf-linlr | pere | whisper | 12 | 2 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['hf-asr-leaderboard', 'automatic-speech-recognition', 'NbAiLab/NST', 'generated_from_trainer'] | true | true | true | 2,489 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-NST-uf-linlr
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NBAILAB/NST - NO-CLOSE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3007
- Wer: 9.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 72
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2046 | 0.05 | 1000 | 0.3426 | 15.2794 |
| 0.148 | 0.1 | 2000 | 0.3284 | 10.8324 |
| 0.121 | 0.15 | 3000 | 0.3092 | 12.8848 |
| 0.1089 | 0.2 | 4000 | 0.2808 | 10.4903 |
| 0.0976 | 0.25 | 5000 | 0.2617 | 9.9202 |
| 0.0901 | 0.3 | 6000 | 0.2604 | 21.8928 |
| 0.0834 | 0.35 | 7000 | 0.2877 | 9.3501 |
| 0.0825 | 0.4 | 8000 | 0.2794 | 9.3501 |
| 0.0553 | 1.05 | 9000 | 0.2845 | 9.5781 |
| 0.0472 | 1.1 | 10000 | 0.2814 | 24.1733 |
| 0.0409 | 1.15 | 11000 | 0.3084 | 8.0958 |
| 0.041 | 1.2 | 12000 | 0.2865 | 9.2360 |
| 0.0353 | 1.25 | 13000 | 0.2828 | 6.4994 |
| 0.0348 | 1.3 | 14000 | 0.2708 | 7.5257 |
| 0.0349 | 1.35 | 15000 | 0.2842 | 23.0331 |
| 0.0361 | 1.4 | 16000 | 0.2769 | 10.1482 |
| 0.0249 | 2.04 | 17000 | 0.2935 | 8.8940 |
| 0.0204 | 2.09 | 18000 | 0.2874 | 12.4287 |
| 0.0175 | 2.14 | 19000 | 0.2882 | 12.9989 |
| 0.0197 | 2.19 | 20000 | 0.3007 | 9.1220 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
| 184d36b1bb6d062a5c937ca98ab8e345 |
research-backup/t5-base-squad-qg-default | research-backup | t5 | 34 | 4 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qg_squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation'] | true | true | true | 4,035 | false |
# Model Card of `research-backup/t5-base-squad-qg-default`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without parameter search (default configuration is taken from [ERNIE-GEN](https://arxiv.org/abs/2001.11314)).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-base-squad-qg-default")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-base-squad-qg-default")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-squad-qg-default/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 57.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 41.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 32.17 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 25.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 26.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 64.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 52.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-base
- max_length: 512
- max_length_output: 32
- epoch: 10
- batch: 32
- lr: 1.25e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.1
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-squad-qg-default/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| efc9699b27493d726d5840829969b91d |
Helsinki-NLP/opus-mt-pon-fr | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-pon-fr
* source languages: pon
* target languages: fr
* OPUS readme: [pon-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fr | 24.4 | 0.410 |
| caaa3f38540ad20f5710e5be4a84469e |
amyeroberts/my_food_classifier | amyeroberts | vit | 5 | 2 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,464 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amyeroberts/my_food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5833
- Validation Loss: 4.5438
- Train Accuracy: 0.125
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 4.5833 | 4.5438 | 0.125 | 0 |
### Framework versions
- Transformers 4.26.0.dev0
- TensorFlow 2.10.0
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 2752c5bc835c1687102b940a0014b441 |
Wizounovziki/t5-small-finetuned-xsum | Wizounovziki | t5 | 12 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,252 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | 2.9185 | 20.6059 | 0.7473 | 20.5288 | 20.5999 | 18.87 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| cabcc170f961fe1af890fe6662b0469c |
Okyx/NERTESTINGCAROLINE1 | Okyx | bert | 8 | 15 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,501 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NERTESTINGCAROLINE1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0022
- Validation Loss: 0.0049
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10395, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
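The serialized optimizer above (AdamWeightDecay with a linear PolynomialDecay over 10395 steps, weight decay 0.01, no warmup) roughly corresponds to the following sketch using `transformers.create_optimizer`; this is an illustration for reproducing a comparable schedule, not the exact training script:
```python
from transformers import create_optimizer

# Recreate a comparable AdamWeightDecay optimizer with a linear decay schedule
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=10395,   # decay_steps from the config above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```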
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0774 | 0.0125 | 0 |
| 0.0097 | 0.0078 | 1 |
| 0.0055 | 0.0058 | 2 |
| 0.0033 | 0.0056 | 3 |
| 0.0022 | 0.0049 | 4 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| d7c3981d68e9204230f2976e122f9f67 |
stevems1/distilroberta-base-SmithsModel2 | stevems1 | roberta | 9 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,265 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-SmithsModel2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8736 | 1.0 | 3632 | 1.6643 |
| 1.5808 | 2.0 | 7264 | 1.4663 |
| 1.498 | 3.0 | 10896 | 1.4090 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 5016c3a99c4e02a093d671a59768d05a |
smangrul/xls-r-mr-model | smangrul | null | 25 | 2 | null | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['mr'] | ['mozilla-foundation/common_voice_8_0', 'openslr', 'shivam/marathi_samanantar_processed', 'shivam/marathi_pib_processed', 'opus100', 'tatoeba', 'tapaco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'openslr', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 3,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR and OPENSLR - SLR64 - MR datasets.
It achieves the following results on the evaluation set:
- Loss: 0.494580
- Wer: 0.401524
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM |
|---|---|
| 40.513437625350984 | 31.04693140794224 |
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
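For readers reproducing a comparable setup with the `Trainer` API, the hyperparameters above map roughly onto the following `TrainingArguments` sketch (the output directory is hypothetical, and this is not the exact training script):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./xls-r-mr-model",   # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size of 32
    warmup_steps=1000,
    num_train_epochs=200,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)
```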
### Training results
| Step | Training Loss | Validation Loss | Wer |
|---|---|---|---|
| 400 | 3.794000 | 3.532227 | 1.000000 |
| 800 | 3.362400 | 3.359044 | 1.000000 |
| 1200 | 2.293900 | 1.011279 | 0.829924 |
| 1600 | 1.233000 | 0.502743 | 0.593662 |
| 2000 | 0.962600 | 0.412519 | 0.496992 |
| 2400 | 0.831800 | 0.402903 | 0.493783 |
| 2800 | 0.737000 | 0.389773 | 0.469314 |
| 3200 | 0.677100 | 0.373987 | 0.436021 |
| 3600 | 0.634400 | 0.383823 | 0.432010 |
| 4000 | 0.586000 | 0.375610 | 0.419575 |
| 4400 | 0.561000 | 0.387891 | 0.418371 |
| 4800 | 0.518500 | 0.386357 | 0.417569 |
| 5200 | 0.515300 | 0.415069 | 0.430004 |
| 5600 | 0.478100 | 0.399211 | 0.408744 |
| 6000 | 0.468100 | 0.424542 | 0.402327 |
| 6400 | 0.439400 | 0.430979 | 0.410750 |
| 6800 | 0.429600 | 0.427700 | 0.409146 |
| 7200 | 0.400300 | 0.451111 | 0.419976 |
| 7600 | 0.395100 | 0.463446 | 0.405134 |
| 8000 | 0.381800 | 0.454752 | 0.407942 |
| 8400 | 0.371500 | 0.461547 | 0.404733 |
| 8800 | 0.362500 | 0.461543 | 0.411151 |
| 9200 | 0.338200 | 0.468299 | 0.417168 |
| 9600 | 0.338800 | 0.480989 | 0.412355 |
| 10000 | 0.317600 | 0.475700 | 0.410750 |
| 10400 | 0.315100 | 0.478920 | 0.403530 |
| 10800 | 0.296200 | 0.480600 | 0.398315 |
| 11200 | 0.299000 | 0.477083 | 0.393502 |
| 11600 | 0.290000 | 0.465646 | 0.393903 |
| 12000 | 0.290900 | 0.490041 | 0.405937 |
| 12400 | 0.275600 | 0.489354 | 0.399519 |
| 12800 | 0.272600 | 0.494580 | 0.395909 |
| 13200 | 0.265900 | 0.497918 | 0.397112 |
| 13600 | 0.266300 | 0.498627 | 0.397513 |
| 14000 | 0.259600 | 0.504610 | 0.401524 |
#### Evaluation Commands
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id smangrul/xls-r-mr-model --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| 0c1c8f1acc7cfdeea66f6b16f64a449d |
selen/distilbert-base-uncased-finetuned-cola | selen | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 926 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 247934f63edc54db7339b9ed6cca782d |
jonatasgrosman/wav2vec2-large-xlsr-53-german | jonatasgrosman | wav2vec2 | 24 | 2,913 | transformers | 6 | automatic-speech-recognition | true | false | true | apache-2.0 | ['de'] | ['common_voice', 'mozilla-foundation/common_voice_6_0'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['audio', 'automatic-speech-recognition', 'de', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,468 | false |
# Fine-tuned XLSR-53 large model for speech recognition in German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-german")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "de"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-german"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS. | ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS |
| ES KOMMT ZUM SHOWDOWN IN GSTAAD. | ES KOMMT ZUG STUNDEDAUTENESTERKT |
| IHRE FOTOSTRECKEN ERSCHIENEN IN MODEMAGAZINEN WIE DER VOGUE, HARPER’S BAZAAR UND MARIE CLAIRE. | IHRE FOTELSTRECKEN ERSCHIENEN MIT MODEMAGAZINEN WIE DER VALG AT DAS BASIN MA RIQUAIR |
| FELIPE HAT EINE AUCH FÜR MONARCHEN UNGEWÖHNLICH LANGE TITELLISTE. | FELIPPE HAT EINE AUCH FÜR MONACHEN UNGEWÖHNLICH LANGE TITELLISTE |
| ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET. | ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET M |
| WAS SOLLS, ICH BIN BEREIT. | WAS SOLL'S ICH BIN BEREIT |
| DAS INTERNET BESTEHT AUS VIELEN COMPUTERN, DIE MITEINANDER VERBUNDEN SIND. | DAS INTERNET BESTEHT AUS VIELEN COMPUTERN DIE MITEINANDER VERBUNDEN SIND |
| DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM. | DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM |
| DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND. | DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND |
| SIE WAR DIE COUSINE VON CARL MARIA VON WEBER. | SIE WAR DIE COUSINE VON KARL-MARIA VON WEBER |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset mozilla-foundation/common_voice_6_0 --config de --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-german,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}erman},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}},
year={2021}
}
``` | d3d8d97fba1ca050d75eaa42bdd14dad |
gagan3012/wav2vec2-xlsr-chuvash | gagan3012 | wav2vec2 | 13 | 8 | transformers | 0 | automatic-speech-recognition | true | false | true | apache-2.0 | ['cv'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 4,037 | false |
# Wav2Vec2-Large-XLSR-53-Chuvash
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cv", split="test")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
#### Results:
Prediction: ['ะฟัะพะตะบัะฟะฐ ะบะธะปำัำณะปะปำะฝ ัำะปำ ะผะตัะพะฟัะธััะธ ะธััะตัะผะตะปะปะต', 'ัะฒฤัะฐ รงะฐะบ ะฟะปะฐะฝะตัะฐ ะผะธะฝัำ ะฟััะฝะธ ะบะฐะปะปะตะฝะฝะฐะฝะฐ']
Reference: ['ะัะพะตะบัะฟะฐ ะบะธะปำัำณะปะปำะฝ, ัำัะปำ ะผะตัะพะฟัะธััะธ ะธัััะตัะผะตะปะปะต.', 'ระฐะบ ะฟะปะฐะฝะตัะฐ ะฟะธัฤ ะฟััะฝ ะธะบะบะตะฝ.']
## Evaluation
The model can be evaluated as follows on the Chuvash test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
!mkdir cer
!wget -O cer/cer.py https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese/raw/main/cer.py
test_dataset = load_dataset("common_voice", "cv", split="test")
wer = load_metric("wer")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-chuvash")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'  # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.40 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1A7Y20c1QkSHfdOmLXPMiOEpwlTjDZ7m5?usp=sharing) | cd10fef44f70613a6075301c35cd60e8 |
TrLOX/gpt2-tdk | TrLOX | gpt2 | 13 | 6 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,013 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dgpt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
| 4669ed1306c54fc0d7597ae07d19c197 |
timm/efficientformer_l7.snap_dist_in1k | timm | null | 4 | 17 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 3,520 | false | # Model card for efficientformer_l7.snap_dist_in1k
An EfficientFormer image classification model. Pretrained with distillation on ImageNet-1k.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 82.2
- GMACs: 10.2
- Activations (M): 24.5
- Image size: 224 x 224
- **Original:** https://github.com/snap-research/EfficientFormer
- **Papers:**
- EfficientFormer: Vision Transformers at MobileNet Speed: https://arxiv.org/abs/2206.01191
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('efficientformer_l7.snap_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'efficientformer_l7.snap_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 |
|efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 |
|efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 |
|efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 |
|efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 |
|efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 |
|efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 |
## Citation
```bibtex
@article{li2022efficientformer,
title={EfficientFormer: Vision Transformers at MobileNet Speed},
author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Ju and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian},
journal={arXiv preprint arXiv:2206.01191},
year={2022}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
| ca6e13877ada156c44896ce39a80456c |
sanjin7/distilbert-base-uncased_za_pravo | sanjin7 | distilbert | 6 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 929 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_za_pravo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 2.0.0.dev20221215+cpu
- Datasets 2.7.1
- Tokenizers 0.13.2
| 1faf1c91ce275de4216534db043d220d |
anas-awadalla/roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0 | anas-awadalla | null | 19 | 0 | null | 0 | null | false | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 987 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| c59559552b6516e4ba9350201bffc9c1 |
vladimirvincan/wav2vec2-common_voice-sl-demo | vladimirvincan | sew-d | 15 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['sl'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'common_voice', 'generated_from_trainer'] | true | true | true | 23,753 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-sl-demo
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the COMMON_VOICE - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5598
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
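Usage is not documented yet; a minimal, hypothetical sketch with the automatic-speech-recognition pipeline is shown below (the audio path is a placeholder and must point to 16 kHz speech). Note that the reported WER of 1.0 suggests this checkpoint may not yet produce useful transcriptions.

```python
from transformers import pipeline

# minimal sketch: transcribe a 16 kHz Slovenian audio file (path is a placeholder)
asr = pipeline("automatic-speech-recognition",
               model="vladimirvincan/wav2vec2-common_voice-sl-demo")
print(asr("sample_sl.wav")["text"])
```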
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| No log | 0.04 | 100 | 3.3430 | 1.0 |
| No log | 0.08 | 200 | 3.2866 | 1.0 |
| No log | 0.12 | 300 | 3.2937 | 1.0 |
| No log | 0.15 | 400 | 3.5030 | 1.0 |
| 3.994 | 0.19 | 500 | 3.4446 | 1.0 |
| 3.994 | 0.23 | 600 | 3.1687 | 1.0 |
| 3.994 | 0.27 | 700 | 3.3227 | 1.0 |
| 3.994 | 0.31 | 800 | 3.2814 | 1.0 |
| 3.994 | 0.35 | 900 | 3.2993 | 1.0 |
| 3.0371 | 0.39 | 1000 | 3.1485 | 1.0 |
| 3.0371 | 0.42 | 1100 | 3.1707 | 1.0 |
| 3.0371 | 0.46 | 1200 | 3.1262 | 1.0 |
| 3.0371 | 0.5 | 1300 | 3.3466 | 1.0 |
| 3.0371 | 0.54 | 1400 | 3.3169 | 1.0 |
| 3.0239 | 0.58 | 1500 | 3.2253 | 1.0 |
| 3.0239 | 0.62 | 1600 | 3.1882 | 1.0 |
| 3.0239 | 0.66 | 1700 | 3.1660 | 1.0 |
| 3.0239 | 0.69 | 1800 | 3.4718 | 1.0 |
| 3.0239 | 0.73 | 1900 | 3.2887 | 1.0 |
| 3.0146 | 0.77 | 2000 | 3.1325 | 1.0 |
| 3.0146 | 0.81 | 2100 | 3.0939 | 1.0 |
| 3.0146 | 0.85 | 2200 | 3.1273 | 1.0 |
| 3.0146 | 0.89 | 2300 | 3.1243 | 1.0 |
| 3.0146 | 0.93 | 2400 | 3.1530 | 1.0 |
| 3.0129 | 0.96 | 2500 | 3.1433 | 1.0 |
| 3.0129 | 1.0 | 2600 | 3.1435 | 1.0 |
| 3.0129 | 1.04 | 2700 | 3.3827 | 1.0 |
| 3.0129 | 1.08 | 2800 | 3.1542 | 1.0 |
| 3.0129 | 1.12 | 2900 | 3.1432 | 1.0 |
| 2.9995 | 1.16 | 3000 | 3.2516 | 1.0 |
| 2.9995 | 1.2 | 3100 | 3.3170 | 1.0 |
| 2.9995 | 1.23 | 3200 | 3.2760 | 1.0 |
| 2.9995 | 1.27 | 3300 | 3.2006 | 1.0 |
| 2.9995 | 1.31 | 3400 | 3.0922 | 1.0 |
| 3.0123 | 1.35 | 3500 | 3.1529 | 1.0 |
| 3.0123 | 1.39 | 3600 | 3.1523 | 1.0 |
| 3.0123 | 1.43 | 3700 | 3.4012 | 1.0 |
| 3.0123 | 1.46 | 3800 | 3.1944 | 1.0 |
| 3.0123 | 1.5 | 3900 | 3.1888 | 1.0 |
| 2.9908 | 1.54 | 4000 | 3.1911 | 1.0 |
| 2.9908 | 1.58 | 4100 | 3.2839 | 1.0 |
| 2.9908 | 1.62 | 4200 | 3.1940 | 1.0 |
| 2.9908 | 1.66 | 4300 | 3.2887 | 1.0 |
| 2.9908 | 1.7 | 4400 | 3.2933 | 1.0 |
| 2.9882 | 1.73 | 4500 | 3.2014 | 1.0 |
| 2.9882 | 1.77 | 4600 | 3.2336 | 1.0 |
| 2.9882 | 1.81 | 4700 | 3.2156 | 1.0 |
| 2.9882 | 1.85 | 4800 | 3.2428 | 1.0 |
| 2.9882 | 1.89 | 4900 | 3.2608 | 1.0 |
| 2.9822 | 1.93 | 5000 | 3.2016 | 1.0 |
| 2.9822 | 1.97 | 5100 | 3.1978 | 1.0 |
| 2.9822 | 2.0 | 5200 | 3.2168 | 1.0 |
| 2.9822 | 2.04 | 5300 | 3.3620 | 1.0 |
| 2.9822 | 2.08 | 5400 | 3.3829 | 1.0 |
| 2.9791 | 2.12 | 5500 | 3.2683 | 1.0 |
| 2.9791 | 2.16 | 5600 | 3.3418 | 1.0 |
| 2.9791 | 2.2 | 5700 | 3.2355 | 1.0 |
| 2.9791 | 2.24 | 5800 | 3.3201 | 1.0 |
| 2.9791 | 2.27 | 5900 | 3.2602 | 1.0 |
| 2.9935 | 2.31 | 6000 | 3.2714 | 1.0 |
| 2.9935 | 2.35 | 6100 | 3.2194 | 1.0 |
| 2.9935 | 2.39 | 6200 | 3.2859 | 1.0 |
| 2.9935 | 2.43 | 6300 | 3.3901 | 1.0 |
| 2.9935 | 2.47 | 6400 | 3.1497 | 1.0 |
| 3.4855 | 2.51 | 6500 | 3.1839 | 1.0 |
| 3.4855 | 2.54 | 6600 | 4.5712 | 1.0 |
| 3.4855 | 2.58 | 6700 | 4.5742 | 1.0 |
| 3.4855 | 2.62 | 6800 | 4.6043 | 1.0 |
| 3.4855 | 2.66 | 6900 | 4.6102 | 1.0 |
| 4.2255 | 2.7 | 7000 | 4.6669 | 1.0 |
| 4.2255 | 2.74 | 7100 | 4.8863 | 1.0 |
| 4.2255 | 2.78 | 7200 | 4.9927 | 1.0 |
| 4.2255 | 2.81 | 7300 | 4.6027 | 1.0 |
| 4.2255 | 2.85 | 7400 | 4.5869 | 1.0 |
| 4.1848 | 2.89 | 7500 | 4.5670 | 1.0 |
| 4.1848 | 2.93 | 7600 | 4.6911 | 1.0 |
| 4.1848 | 2.97 | 7700 | 4.6785 | 1.0 |
| 4.1848 | 3.01 | 7800 | 4.7985 | 1.0 |
| 4.1848 | 3.05 | 7900 | 4.6301 | 1.0 |
| 4.189 | 3.08 | 8000 | 4.5125 | 1.0 |
| 4.189 | 3.12 | 8100 | 4.5072 | 1.0 |
| 4.189 | 3.16 | 8200 | 4.5629 | 1.0 |
| 4.189 | 3.2 | 8300 | 4.6750 | 1.0 |
| 4.189 | 3.24 | 8400 | 4.6424 | 1.0 |
| 4.187 | 3.28 | 8500 | 4.5614 | 1.0 |
| 4.187 | 3.32 | 8600 | 4.5842 | 1.0 |
| 4.187 | 3.35 | 8700 | 4.5979 | 1.0 |
| 4.187 | 3.39 | 8800 | 4.6466 | 1.0 |
| 4.187 | 3.43 | 8900 | 4.6582 | 1.0 |
| 4.1585 | 3.47 | 9000 | 4.6266 | 1.0 |
| 4.1585 | 3.51 | 9100 | 4.5829 | 1.0 |
| 4.1585 | 3.55 | 9200 | 4.6936 | 1.0 |
| 4.1585 | 3.59 | 9300 | 4.5669 | 1.0 |
| 4.1585 | 3.62 | 9400 | 4.5689 | 1.0 |
| 4.1351 | 3.66 | 9500 | 4.7381 | 1.0 |
| 4.1351 | 3.7 | 9600 | 4.6701 | 1.0 |
| 4.1351 | 3.74 | 9700 | 4.6769 | 1.0 |
| 4.1351 | 3.78 | 9800 | 4.5556 | 1.0 |
| 4.1351 | 3.82 | 9900 | 4.5498 | 1.0 |
| 4.1709 | 3.86 | 10000 | 4.6919 | 1.0 |
| 4.1709 | 3.89 | 10100 | 4.6067 | 1.0 |
| 4.1709 | 3.93 | 10200 | 4.6368 | 1.0 |
| 4.1709 | 3.97 | 10300 | 4.6049 | 1.0 |
| 4.1709 | 4.01 | 10400 | 4.6033 | 1.0 |
| 4.1901 | 4.05 | 10500 | 4.6119 | 1.0 |
| 4.1901 | 4.09 | 10600 | 4.5714 | 1.0 |
| 4.1901 | 4.12 | 10700 | 4.5847 | 1.0 |
| 4.1901 | 4.16 | 10800 | 4.6093 | 1.0 |
| 4.1901 | 4.2 | 10900 | 4.5800 | 1.0 |
| 4.1907 | 4.24 | 11000 | 4.5871 | 1.0 |
| 4.1907 | 4.28 | 11100 | 4.6121 | 1.0 |
| 4.1907 | 4.32 | 11200 | 4.5385 | 1.0 |
| 4.1907 | 4.36 | 11300 | 4.6086 | 1.0 |
| 4.1907 | 4.39 | 11400 | 4.5917 | 1.0 |
| 4.1686 | 4.43 | 11500 | 4.5608 | 1.0 |
| 4.1686 | 4.47 | 11600 | 4.6417 | 1.0 |
| 4.1686 | 4.51 | 11700 | 4.6308 | 1.0 |
| 4.1686 | 4.55 | 11800 | 4.6191 | 1.0 |
| 4.1686 | 4.59 | 11900 | 4.6063 | 1.0 |
| 4.1551 | 4.63 | 12000 | 4.6244 | 1.0 |
| 4.1551 | 4.66 | 12100 | 4.6262 | 1.0 |
| 4.1551 | 4.7 | 12200 | 4.5864 | 1.0 |
| 4.1551 | 4.74 | 12300 | 4.5340 | 1.0 |
| 4.1551 | 4.78 | 12400 | 4.6080 | 1.0 |
| 4.1571 | 4.82 | 12500 | 4.5470 | 1.0 |
| 4.1571 | 4.86 | 12600 | 4.6032 | 1.0 |
| 4.1571 | 4.9 | 12700 | 4.6595 | 1.0 |
| 4.1571 | 4.93 | 12800 | 4.7098 | 1.0 |
| 4.1571 | 4.97 | 12900 | 4.6056 | 1.0 |
| 4.1652 | 5.01 | 13000 | 4.5714 | 1.0 |
| 4.1652 | 5.05 | 13100 | 4.5000 | 1.0 |
| 4.1652 | 5.09 | 13200 | 4.5730 | 1.0 |
| 4.1652 | 5.13 | 13300 | 4.5425 | 1.0 |
| 4.1652 | 5.17 | 13400 | 4.5585 | 1.0 |
| 4.1694 | 5.2 | 13500 | 4.6677 | 1.0 |
| 4.1694 | 5.24 | 13600 | 4.5399 | 1.0 |
| 4.1694 | 5.28 | 13700 | 4.6873 | 1.0 |
| 4.1694 | 5.32 | 13800 | 4.5722 | 1.0 |
| 4.1694 | 5.36 | 13900 | 4.5122 | 1.0 |
| 4.1541 | 5.4 | 14000 | 4.5384 | 1.0 |
| 4.1541 | 5.44 | 14100 | 4.6260 | 1.0 |
| 4.1541 | 5.47 | 14200 | 4.5863 | 1.0 |
| 4.1541 | 5.51 | 14300 | 4.5343 | 1.0 |
| 4.1541 | 5.55 | 14400 | 4.5562 | 1.0 |
| 4.1683 | 5.59 | 14500 | 4.6076 | 1.0 |
| 4.1683 | 5.63 | 14600 | 4.5744 | 1.0 |
| 4.1683 | 5.67 | 14700 | 4.6165 | 1.0 |
| 4.1683 | 5.71 | 14800 | 4.5068 | 1.0 |
| 4.1683 | 5.74 | 14900 | 4.5309 | 1.0 |
| 4.1641 | 5.78 | 15000 | 4.6560 | 1.0 |
| 4.1641 | 5.82 | 15100 | 4.4879 | 1.0 |
| 4.1641 | 5.86 | 15200 | 4.6260 | 1.0 |
| 4.1641 | 5.9 | 15300 | 4.5928 | 1.0 |
| 4.1641 | 5.94 | 15400 | 4.6322 | 1.0 |
| 4.1617 | 5.98 | 15500 | 4.5097 | 1.0 |
| 4.1617 | 6.01 | 15600 | 4.6016 | 1.0 |
| 4.1617 | 6.05 | 15700 | 4.6230 | 1.0 |
| 4.1617 | 6.09 | 15800 | 4.5161 | 1.0 |
| 4.1617 | 6.13 | 15900 | 4.5222 | 1.0 |
| 4.1568 | 6.17 | 16000 | 4.6096 | 1.0 |
| 4.1568 | 6.21 | 16100 | 4.5736 | 1.0 |
| 4.1568 | 6.25 | 16200 | 4.5205 | 1.0 |
| 4.1568 | 6.28 | 16300 | 4.5903 | 1.0 |
| 4.1568 | 6.32 | 16400 | 4.5593 | 1.0 |
| 4.1525 | 6.36 | 16500 | 4.5578 | 1.0 |
| 4.1525 | 6.4 | 16600 | 4.5292 | 1.0 |
| 4.1525 | 6.44 | 16700 | 4.5798 | 1.0 |
| 4.1525 | 6.48 | 16800 | 4.5644 | 1.0 |
| 4.1525 | 6.52 | 16900 | 4.5598 | 1.0 |
| 4.179 | 6.55 | 17000 | 4.5607 | 1.0 |
| 4.179 | 6.59 | 17100 | 4.6244 | 1.0 |
| 4.179 | 6.63 | 17200 | 4.5471 | 1.0 |
| 4.179 | 6.67 | 17300 | 4.5280 | 1.0 |
| 4.179 | 6.71 | 17400 | 4.5295 | 1.0 |
| 4.1605 | 6.75 | 17500 | 4.5429 | 1.0 |
| 4.1605 | 6.78 | 17600 | 4.6017 | 1.0 |
| 4.1605 | 6.82 | 17700 | 4.5794 | 1.0 |
| 4.1605 | 6.86 | 17800 | 4.5590 | 1.0 |
| 4.1605 | 6.9 | 17900 | 4.6594 | 1.0 |
| 4.1751 | 6.94 | 18000 | 4.5115 | 1.0 |
| 4.1751 | 6.98 | 18100 | 4.5160 | 1.0 |
| 4.1751 | 7.02 | 18200 | 4.5009 | 1.0 |
| 4.1751 | 7.05 | 18300 | 4.5132 | 1.0 |
| 4.1751 | 7.09 | 18400 | 4.5412 | 1.0 |
| 4.141 | 7.13 | 18500 | 4.6033 | 1.0 |
| 4.141 | 7.17 | 18600 | 4.5443 | 1.0 |
| 4.141 | 7.21 | 18700 | 4.5490 | 1.0 |
| 4.141 | 7.25 | 18800 | 4.6165 | 1.0 |
| 4.141 | 7.29 | 18900 | 4.5632 | 1.0 |
| 4.1622 | 7.32 | 19000 | 4.5154 | 1.0 |
| 4.1622 | 7.36 | 19100 | 4.5596 | 1.0 |
| 4.1622 | 7.4 | 19200 | 4.5795 | 1.0 |
| 4.1622 | 7.44 | 19300 | 4.5098 | 1.0 |
| 4.1622 | 7.48 | 19400 | 4.5631 | 1.0 |
| 4.1382 | 7.52 | 19500 | 4.5778 | 1.0 |
| 4.1382 | 7.56 | 19600 | 4.5816 | 1.0 |
| 4.1382 | 7.59 | 19700 | 4.5228 | 1.0 |
| 4.1382 | 7.63 | 19800 | 4.5351 | 1.0 |
| 4.1382 | 7.67 | 19900 | 4.5684 | 1.0 |
| 4.1672 | 7.71 | 20000 | 4.5363 | 1.0 |
| 4.1672 | 7.75 | 20100 | 4.5975 | 1.0 |
| 4.1672 | 7.79 | 20200 | 4.5493 | 1.0 |
| 4.1672 | 7.83 | 20300 | 4.5319 | 1.0 |
| 4.1672 | 7.86 | 20400 | 4.5349 | 1.0 |
| 4.171 | 7.9 | 20500 | 4.5395 | 1.0 |
| 4.171 | 7.94 | 20600 | 4.5121 | 1.0 |
| 4.171 | 7.98 | 20700 | 4.5475 | 1.0 |
| 4.171 | 8.02 | 20800 | 4.5278 | 1.0 |
| 4.171 | 8.06 | 20900 | 4.4753 | 1.0 |
| 4.1569 | 8.1 | 21000 | 4.5646 | 1.0 |
| 4.1569 | 8.13 | 21100 | 4.5385 | 1.0 |
| 4.1569 | 8.17 | 21200 | 4.5302 | 1.0 |
| 4.1569 | 8.21 | 21300 | 4.4978 | 1.0 |
| 4.1569 | 8.25 | 21400 | 4.5313 | 1.0 |
| 4.1708 | 8.29 | 21500 | 4.5062 | 1.0 |
| 4.1708 | 8.33 | 21600 | 4.5365 | 1.0 |
| 4.1708 | 8.37 | 21700 | 4.4771 | 1.0 |
| 4.1708 | 8.4 | 21800 | 4.5899 | 1.0 |
| 4.1708 | 8.44 | 21900 | 4.5667 | 1.0 |
| 4.1756 | 8.48 | 22000 | 4.5478 | 1.0 |
| 4.1756 | 8.52 | 22100 | 4.4779 | 1.0 |
| 4.1756 | 8.56 | 22200 | 4.5599 | 1.0 |
| 4.1756 | 8.6 | 22300 | 4.4761 | 1.0 |
| 4.1756 | 8.64 | 22400 | 4.5487 | 1.0 |
| 4.1541 | 8.67 | 22500 | 4.5875 | 1.0 |
| 4.1541 | 8.71 | 22600 | 4.5143 | 1.0 |
| 4.1541 | 8.75 | 22700 | 4.5563 | 1.0 |
| 4.1541 | 8.79 | 22800 | 4.5486 | 1.0 |
| 4.1541 | 8.83 | 22900 | 4.5503 | 1.0 |
| 4.1424 | 8.87 | 23000 | 4.5155 | 1.0 |
| 4.1424 | 8.91 | 23100 | 4.5571 | 1.0 |
| 4.1424 | 8.94 | 23200 | 4.5171 | 1.0 |
| 4.1424 | 8.98 | 23300 | 4.5438 | 1.0 |
| 4.1424 | 9.02 | 23400 | 4.4792 | 1.0 |
| 4.154 | 9.06 | 23500 | 4.5598 | 1.0 |
| 4.154 | 9.1 | 23600 | 4.5516 | 1.0 |
| 4.154 | 9.14 | 23700 | 4.5512 | 1.0 |
| 4.154 | 9.18 | 23800 | 4.5392 | 1.0 |
| 4.154 | 9.21 | 23900 | 4.5474 | 1.0 |
| 4.1573 | 9.25 | 24000 | 4.5573 | 1.0 |
| 4.1573 | 9.29 | 24100 | 4.5027 | 1.0 |
| 4.1573 | 9.33 | 24200 | 4.5666 | 1.0 |
| 4.1573 | 9.37 | 24300 | 4.4937 | 1.0 |
| 4.1573 | 9.41 | 24400 | 4.5804 | 1.0 |
| 4.1503 | 9.44 | 24500 | 4.5279 | 1.0 |
| 4.1503 | 9.48 | 24600 | 4.4996 | 1.0 |
| 4.1503 | 9.52 | 24700 | 4.5425 | 1.0 |
| 4.1503 | 9.56 | 24800 | 4.5450 | 1.0 |
| 4.1503 | 9.6 | 24900 | 4.5324 | 1.0 |
| 4.1342 | 9.64 | 25000 | 4.5407 | 1.0 |
| 4.1342 | 9.68 | 25100 | 4.5086 | 1.0 |
| 4.1342 | 9.71 | 25200 | 4.5052 | 1.0 |
| 4.1342 | 9.75 | 25300 | 4.5422 | 1.0 |
| 4.1342 | 9.79 | 25400 | 4.6180 | 1.0 |
| 4.1745 | 9.83 | 25500 | 4.5330 | 1.0 |
| 4.1745 | 9.87 | 25600 | 4.5291 | 1.0 |
| 4.1745 | 9.91 | 25700 | 4.5389 | 1.0 |
| 4.1745 | 9.95 | 25800 | 4.4924 | 1.0 |
| 4.1745 | 9.98 | 25900 | 4.5381 | 1.0 |
| 4.1648 | 10.02 | 26000 | 4.5042 | 1.0 |
| 4.1648 | 10.06 | 26100 | 4.5266 | 1.0 |
| 4.1648 | 10.1 | 26200 | 4.5485 | 1.0 |
| 4.1648 | 10.14 | 26300 | 4.5485 | 1.0 |
| 4.1648 | 10.18 | 26400 | 4.5801 | 1.0 |
| 4.1616 | 10.22 | 26500 | 4.5290 | 1.0 |
| 4.1616 | 10.25 | 26600 | 4.5198 | 1.0 |
| 4.1616 | 10.29 | 26700 | 4.5473 | 1.0 |
| 4.1616 | 10.33 | 26800 | 4.5708 | 1.0 |
| 4.1616 | 10.37 | 26900 | 4.5493 | 1.0 |
| 4.1643 | 10.41 | 27000 | 4.5487 | 1.0 |
| 4.1643 | 10.45 | 27100 | 4.5729 | 1.0 |
| 4.1643 | 10.49 | 27200 | 4.5831 | 1.0 |
| 4.1643 | 10.52 | 27300 | 4.5384 | 1.0 |
| 4.1643 | 10.56 | 27400 | 4.5377 | 1.0 |
| 4.1339 | 10.6 | 27500 | 4.5132 | 1.0 |
| 4.1339 | 10.64 | 27600 | 4.5547 | 1.0 |
| 4.1339 | 10.68 | 27700 | 4.5343 | 1.0 |
| 4.1339 | 10.72 | 27800 | 4.5886 | 1.0 |
| 4.1339 | 10.76 | 27900 | 4.5708 | 1.0 |
| 4.15 | 10.79 | 28000 | 4.5498 | 1.0 |
| 4.15 | 10.83 | 28100 | 4.5415 | 1.0 |
| 4.15 | 10.87 | 28200 | 4.5294 | 1.0 |
| 4.15 | 10.91 | 28300 | 4.5261 | 1.0 |
| 4.15 | 10.95 | 28400 | 4.5713 | 1.0 |
| 4.1442 | 10.99 | 28500 | 4.5613 | 1.0 |
| 4.1442 | 11.03 | 28600 | 4.5691 | 1.0 |
| 4.1442 | 11.06 | 28700 | 4.5553 | 1.0 |
| 4.1442 | 11.1 | 28800 | 4.5442 | 1.0 |
| 4.1442 | 11.14 | 28900 | 4.5930 | 1.0 |
| 4.1304 | 11.18 | 29000 | 4.5713 | 1.0 |
| 4.1304 | 11.22 | 29100 | 4.5810 | 1.0 |
| 4.1304 | 11.26 | 29200 | 4.5594 | 1.0 |
| 4.1304 | 11.3 | 29300 | 4.5515 | 1.0 |
| 4.1304 | 11.33 | 29400 | 4.5566 | 1.0 |
| 4.1402 | 11.37 | 29500 | 4.5712 | 1.0 |
| 4.1402 | 11.41 | 29600 | 4.5697 | 1.0 |
| 4.1402 | 11.45 | 29700 | 4.5513 | 1.0 |
| 4.1402 | 11.49 | 29800 | 4.5739 | 1.0 |
| 4.1402 | 11.53 | 29900 | 4.6001 | 1.0 |
| 4.195 | 11.57 | 30000 | 4.5735 | 1.0 |
| 4.195 | 11.6 | 30100 | 4.5603 | 1.0 |
| 4.195 | 11.64 | 30200 | 4.5702 | 1.0 |
| 4.195 | 11.68 | 30300 | 4.5459 | 1.0 |
| 4.195 | 11.72 | 30400 | 4.5508 | 1.0 |
| 4.135 | 11.76 | 30500 | 4.5375 | 1.0 |
| 4.135 | 11.8 | 30600 | 4.5250 | 1.0 |
| 4.135 | 11.84 | 30700 | 4.5578 | 1.0 |
| 4.135 | 11.87 | 30800 | 4.5256 | 1.0 |
| 4.135 | 11.91 | 30900 | 4.5443 | 1.0 |
| 4.1696 | 11.95 | 31000 | 4.5366 | 1.0 |
| 4.1696 | 11.99 | 31100 | 4.5574 | 1.0 |
| 4.1696 | 12.03 | 31200 | 4.5514 | 1.0 |
| 4.1696 | 12.07 | 31300 | 4.5625 | 1.0 |
| 4.1696 | 12.1 | 31400 | 4.5378 | 1.0 |
| 4.1405 | 12.14 | 31500 | 4.5773 | 1.0 |
| 4.1405 | 12.18 | 31600 | 4.5503 | 1.0 |
| 4.1405 | 12.22 | 31700 | 4.5753 | 1.0 |
| 4.1405 | 12.26 | 31800 | 4.5475 | 1.0 |
| 4.1405 | 12.3 | 31900 | 4.5768 | 1.0 |
| 4.1778 | 12.34 | 32000 | 4.5732 | 1.0 |
| 4.1778 | 12.37 | 32100 | 4.5518 | 1.0 |
| 4.1778 | 12.41 | 32200 | 4.5752 | 1.0 |
| 4.1778 | 12.45 | 32300 | 4.5878 | 1.0 |
| 4.1778 | 12.49 | 32400 | 4.5367 | 1.0 |
| 4.1242 | 12.53 | 32500 | 4.5586 | 1.0 |
| 4.1242 | 12.57 | 32600 | 4.5744 | 1.0 |
| 4.1242 | 12.61 | 32700 | 4.5373 | 1.0 |
| 4.1242 | 12.64 | 32800 | 4.5624 | 1.0 |
| 4.1242 | 12.68 | 32900 | 4.5548 | 1.0 |
| 4.1441 | 12.72 | 33000 | 4.5713 | 1.0 |
| 4.1441 | 12.76 | 33100 | 4.5437 | 1.0 |
| 4.1441 | 12.8 | 33200 | 4.5435 | 1.0 |
| 4.1441 | 12.84 | 33300 | 4.5863 | 1.0 |
| 4.1441 | 12.88 | 33400 | 4.5653 | 1.0 |
| 4.1563 | 12.91 | 33500 | 4.5587 | 1.0 |
| 4.1563 | 12.95 | 33600 | 4.5508 | 1.0 |
| 4.1563 | 12.99 | 33700 | 4.5376 | 1.0 |
| 4.1563 | 13.03 | 33800 | 4.5343 | 1.0 |
| 4.1563 | 13.07 | 33900 | 4.5554 | 1.0 |
| 4.1531 | 13.11 | 34000 | 4.5575 | 1.0 |
| 4.1531 | 13.15 | 34100 | 4.5525 | 1.0 |
| 4.1531 | 13.18 | 34200 | 4.5654 | 1.0 |
| 4.1531 | 13.22 | 34300 | 4.5674 | 1.0 |
| 4.1531 | 13.26 | 34400 | 4.5382 | 1.0 |
| 4.1443 | 13.3 | 34500 | 4.5145 | 1.0 |
| 4.1443 | 13.34 | 34600 | 4.5612 | 1.0 |
| 4.1443 | 13.38 | 34700 | 4.5411 | 1.0 |
| 4.1443 | 13.42 | 34800 | 4.5757 | 1.0 |
| 4.1443 | 13.45 | 34900 | 4.5439 | 1.0 |
| 4.1453 | 13.49 | 35000 | 4.5571 | 1.0 |
| 4.1453 | 13.53 | 35100 | 4.5523 | 1.0 |
| 4.1453 | 13.57 | 35200 | 4.5563 | 1.0 |
| 4.1453 | 13.61 | 35300 | 4.5610 | 1.0 |
| 4.1453 | 13.65 | 35400 | 4.5749 | 1.0 |
| 4.1227 | 13.69 | 35500 | 4.5580 | 1.0 |
| 4.1227 | 13.72 | 35600 | 4.5739 | 1.0 |
| 4.1227 | 13.76 | 35700 | 4.5472 | 1.0 |
| 4.1227 | 13.8 | 35800 | 4.5956 | 1.0 |
| 4.1227 | 13.84 | 35900 | 4.5676 | 1.0 |
| 4.1778 | 13.88 | 36000 | 4.5454 | 1.0 |
| 4.1778 | 13.92 | 36100 | 4.5527 | 1.0 |
| 4.1778 | 13.96 | 36200 | 4.5483 | 1.0 |
| 4.1778 | 13.99 | 36300 | 4.5611 | 1.0 |
| 4.1778 | 14.03 | 36400 | 4.5552 | 1.0 |
| 4.1532 | 14.07 | 36500 | 4.5607 | 1.0 |
| 4.1532 | 14.11 | 36600 | 4.5542 | 1.0 |
| 4.1532 | 14.15 | 36700 | 4.5544 | 1.0 |
| 4.1532 | 14.19 | 36800 | 4.5486 | 1.0 |
| 4.1532 | 14.23 | 36900 | 4.5616 | 1.0 |
| 4.1485 | 14.26 | 37000 | 4.5575 | 1.0 |
| 4.1485 | 14.3 | 37100 | 4.5511 | 1.0 |
| 4.1485 | 14.34 | 37200 | 4.5432 | 1.0 |
| 4.1485 | 14.38 | 37300 | 4.5428 | 1.0 |
| 4.1485 | 14.42 | 37400 | 4.5501 | 1.0 |
| 4.1402 | 14.46 | 37500 | 4.5544 | 1.0 |
| 4.1402 | 14.49 | 37600 | 4.5502 | 1.0 |
| 4.1402 | 14.53 | 37700 | 4.5512 | 1.0 |
| 4.1402 | 14.57 | 37800 | 4.5572 | 1.0 |
| 4.1402 | 14.61 | 37900 | 4.5567 | 1.0 |
| 4.182 | 14.65 | 38000 | 4.5473 | 1.0 |
| 4.182 | 14.69 | 38100 | 4.5556 | 1.0 |
| 4.182 | 14.73 | 38200 | 4.5576 | 1.0 |
| 4.182 | 14.76 | 38300 | 4.5512 | 1.0 |
| 4.182 | 14.8 | 38400 | 4.5555 | 1.0 |
| 4.1144 | 14.84 | 38500 | 4.5595 | 1.0 |
| 4.1144 | 14.88 | 38600 | 4.5639 | 1.0 |
| 4.1144 | 14.92 | 38700 | 4.5642 | 1.0 |
| 4.1144 | 14.96 | 38800 | 4.5601 | 1.0 |
| 4.1144 | 15.0 | 38900 | 4.5598 | 1.0 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| 2554c88fc86e37eb6cc18578cd6143ec |
Neulvo/bert-finetuned-ner | Neulvo | bert | 14 | 9 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,513 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0793
- Precision: 0.9358
- Recall: 0.9510
- F1: 0.9433
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
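Pending fuller documentation, a minimal, illustrative sketch for running the checkpoint as a named-entity tagger is shown below; the example sentence is invented for illustration.

```python
from transformers import pipeline

# minimal sketch: run the fine-tuned checkpoint as a NER tagger
ner = pipeline("token-classification",
               model="Neulvo/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```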
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0247 | 1.0 | 1756 | 0.0798 | 0.9269 | 0.9435 | 0.9351 | 0.9840 |
| 0.0136 | 2.0 | 3512 | 0.0776 | 0.9309 | 0.9495 | 0.9401 | 0.9857 |
| 0.0097 | 3.0 | 5268 | 0.0793 | 0.9358 | 0.9510 | 0.9433 | 0.9862 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
| f45440eda7e5716daaf594059e2e0d67 |
Dimitre/universal-sentence-encoder | Dimitre | null | 5 | 0 | tfhub | 2 | sentence-similarity | false | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text', 'sentence-similarity', 'use', 'universal-sentence-encoder', 'dan', 'tensorflow'] | false | true | true | 5,109 | false |
## Model name: universal-sentence-encoder
## Description adapted from [TFHub](https://tfhub.dev/google/universal-sentence-encoder/4)
# Overview
The Universal Sentence Encoder encodes text into high-dimensional vectors that can be used for text classification, semantic similarity, clustering and other natural language tasks.
The model is trained and optimized for greater-than-word length text, such as sentences, phrases or short paragraphs. It is trained on a variety of data sources and a variety of tasks with the aim of dynamically accommodating a wide variety of natural language understanding tasks. The input is variable length English text and the output is a 512 dimensional vector. We apply this model to the [STS benchmark](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) for semantic similarity, and the results can be seen in the [example notebook](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb) made available. The universal-sentence-encoder model is trained with a deep averaging network (DAN) encoder.
To learn more about text embeddings, refer to the [TensorFlow Embeddings](https://www.tensorflow.org/tutorials/text/word_embeddings) documentation. Our encoder differs from word level embedding models in that we train on a number of natural language prediction tasks that require modeling the meaning of word sequences rather than just individual words. Details are available in the paper "Universal Sentence Encoder" [1].
## Universal Sentence Encoder family
There are several versions of universal sentence encoder models trained with different goals, including size/performance trade-offs, multilingual coverage, and fine-grained question answer retrieval.
- [Universal Sentence Encoder family](https://tfhub.dev/google/collections/universal-sentence-encoder/1)
### Example use
### Using TF Hub and HF Hub
```
from huggingface_hub import snapshot_download
from tensorflow_hub import KerasLayer

model_path = snapshot_download(repo_id="Dimitre/universal-sentence-encoder")
model = KerasLayer(handle=model_path)
embeddings = model([
"The quick brown fox jumps over the lazy dog.",
"I am a sentence for which I would like to get its embedding"])
print(embeddings)
# The following are example embedding output of 512 dimensions per sentence
# Embedding for: The quick brown fox jumps over the lazy dog.
# [-0.03133016 -0.06338634 -0.01607501, ...]
# Embedding for: I am a sentence for which I would like to get its embedding.
# [0.05080863 -0.0165243 0.01573782, ...]
```
### Using [TF Hub fork](https://github.com/dimitreOliveira/hub)
```
# pull_from_hub is provided by the tensorflow_hub fork linked above
model = pull_from_hub(repo_id="Dimitre/universal-sentence-encoder")
embeddings = model([
"The quick brown fox jumps over the lazy dog.",
"I am a sentence for which I would like to get its embedding"])
print(embeddings)
# The following are example embedding output of 512 dimensions per sentence
# Embedding for: The quick brown fox jumps over the lazy dog.
# [-0.03133016 -0.06338634 -0.01607501, ...]
# Embedding for: I am a sentence for which I would like to get its embedding.
# [0.05080863 -0.0165243 0.01573782, ...]
```
This module is about 1GB. Depending on your network speed, it might take a while to load the first time you run inference with it. After that, loading the model should be faster as modules are cached by default ([learn more about caching](https://www.tensorflow.org/hub/tf2_saved_model)). Further, once a module is loaded to memory, inference time should be relatively fast.
### Preprocessing
The module does not require any preprocessing of the data before applying it; it performs best-effort text input preprocessing inside the graph.
# Semantic Similarity
![Semantic Similarity Graphic](https://www.gstatic.com/aihub/tfhub/universal-sentence-encoder/example-similarity.png)
Semantic similarity is a measure of the degree to which two pieces of text carry the same meaning. This is broadly useful in obtaining good coverage over the numerous ways that a thought can be expressed using language without needing to manually enumerate them.
Simple applications include improving the coverage of systems that trigger behaviors on certain keywords, phrases or utterances. [This section of the notebook](https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb#scrollTo=BnvjATdy64eR) shows how to encode text and compare encoding distances as a proxy for semantic similarity.
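As a concrete illustration of using encoding distances as a similarity proxy, the embeddings returned by the usage examples above can be compared with a cosine similarity; this sketch assumes `model` was loaded as shown earlier.

```python
import numpy as np

# assumes `model` was loaded as in the usage examples above
sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence for which I would like to get its embedding",
]
embeddings = np.asarray(model(sentences))

# cosine similarity between the two sentences
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = normed @ normed.T
print(similarity[0, 1])
```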
# Classification
![Text Classification Graphic](https://www.gstatic.com/aihub/tfhub/universal-sentence-encoder/example-classification.png)
[This notebook](https://colab.research.google.com/github/tensorflow/hub/blob/master/docs/tutorials/text_classification_with_tf_hub.ipynb) shows how to train a simple binary text classifier on top of any TF-Hub module that can embed sentences. The Universal Sentence Encoder was partially trained with custom text classification tasks in mind. These kinds of classifiers can be trained to perform a wide variety of classification tasks often with a very small amount of labeled examples. | 1dd0f2ca7161e598db3a76ae284d99ff |
loicmagne/pr_dataset_metadata | loicmagne | null | 10 | 0 | null | 0 | null | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,142 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pr_dataset_metadata
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6216
- eval_accuracy: 1.0
- eval_runtime: 0.4472
- eval_samples_per_second: 2.236
- eval_steps_per_second: 2.236
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
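Pending fuller documentation, a minimal, illustrative sketch for trying the checkpoint as a text classifier is shown below; since the card does not document label names, the output labels may be generic (e.g. LABEL_0 / LABEL_1).

```python
from transformers import pipeline

# minimal sketch: sentiment-style classification with the fine-tuned checkpoint
clf = pipeline("text-classification", model="loicmagne/pr_dataset_metadata")
print(clf("This movie was surprisingly good."))
```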
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: not_parallel
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.1
| 645dbaa776a02fdc6a93ee56444545c8 |
aaya/distilbert-base-uncased-finetuned-ner | aaya | distilbert | 13 | 12 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 927 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| c7da53eabd45ffe6737f62d659f0b8a3 |
AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-0-val-v2 | AlekseyKorshuk | opt | 13 | 4 | transformers | 0 | text-generation | true | false | false | other | null | ['ChaiML/dalio_combined_v1'] | null | 16 | 16 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,073 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-ri-reproduce-combined-4-gpu-0-val-v2
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the ChaiML/dalio_combined_v1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
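Pending fuller documentation, a minimal, hypothetical generation sketch is shown below. The conversational prompt format is a guess, and at roughly 6.7B parameters the checkpoint is assumed to be loaded in float16 on a large GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlekseyKorshuk/6.7b-ri-reproduce-combined-4-gpu-0-val-v2"

# minimal sketch: ~6.7B parameters, so float16 on a large GPU is assumed
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16).to("cuda")

prompt = "User: How should I think about diversification?\nAssistant:"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```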
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| cf8cda039a50c7283c5bb56e5a39f23d |
coreml/coreml-waifu-diffusion-v1-4 | coreml | null | 4 | 0 | null | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coreml', 'stable-diffusion', 'text-to-image'] | false | true | true | 2,855 | false |
# Core ML Converted Model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
`split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
wd-1-4-anime_e2_split-einsum.zip includes the TextEncoder from wd-1-4-anime_e1_split-einsum.zip.<br>
![image](https://user-images.githubusercontent.com/26317155/210155933-db3a5f1a-1ec3-4777-915c-6deff2841ce9.png)
<sub>masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck</sub>
# Waifu Diffusion v1.4
Waifu Diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
- [Waifu Diffusion 1.4 Anime Epoch 1](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.ckpt): A test model made to properly ensure that the training setup works.
- [Waifu Diffusion 1.4 Anime Inference Config](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/wd-1-4-anime_e1.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by Stability AI and NovelAI.
- [Haru](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Cafe](https://twitter.com/cafeai_labs)
In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
[![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai) | 46f59d9da0818c5c95fb23c7fce5f9b4 |
gokuls/mobilebert_add_GLUE_Experiment_mnli_128 | gokuls | mobilebert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,844 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_mnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0985
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0987 | 1.0 | 3068 | 1.0987 | 0.3182 |
| 1.0986 | 2.0 | 6136 | 1.0986 | 0.3182 |
| 1.0986 | 3.0 | 9204 | 1.0988 | 0.3274 |
| 1.0986 | 4.0 | 12272 | 1.0986 | 0.3182 |
| 1.0986 | 5.0 | 15340 | 1.0985 | 0.3545 |
| 1.0986 | 6.0 | 18408 | 1.0987 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0988 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3545 |
| 1.0986 | 9.0 | 27612 | 1.0986 | 0.3545 |
| 1.0986 | 10.0 | 30680 | 1.0987 | 0.3182 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| af581471be49956a1c07cc2d117a5c3e |
garyw/clinical-embeddings-100d-ft-oa-all | garyw | null | 9 | 0 | null | 0 | null | false | false | false | gpl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,542 | false |
Pre-trained word embeddings using the text of published scientific manuscripts. These embeddings use 100 dimensions and were trained using the fasttext algorithm on all available manuscripts found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/
Citation:
```
@article{flamholz2022word,
title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
journal={Journal of Biomedical Informatics},
volume={125},
pages={103971},
year={2022},
publisher={Elsevier}
}
```
## Quick start
The word embeddings are saved in a format compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/).
First download the files from this archive. Then load the embeddings into Python.
```python
from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models
# Load the model
model = FastText.load('ft_oa_all_100d.bin')
# Return 100-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')
# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
``` | e22cee832299353f1274fb291a21ca24 |
kpriyanshu256/whisper-medium-as-100-32-1e-05-pretrain-bn | kpriyanshu256 | whisper | 17 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['as'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,415 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium-Assamese
This model is a fine-tuned version of [kpriyanshu256/whisper-medium-as-200-32-1e-05-bn](https://huggingface.co/kpriyanshu256/whisper-medium-as-200-32-1e-05-bn) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2247
- Wer: 25.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
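Pending fuller documentation, a minimal, illustrative transcription sketch is shown below; it assumes `librosa` for audio loading, and the file path is a placeholder for a 16 kHz Assamese recording.

```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration

repo = "kpriyanshu256/whisper-medium-as-100-32-1e-05-pretrain-bn"

processor = WhisperProcessor.from_pretrained(repo)
model = WhisperForConditionalGeneration.from_pretrained(repo)

# load a 16 kHz audio clip (path is a placeholder)
speech, _ = librosa.load("sample_as.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features, max_new_tokens=128)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```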
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0864 | 3.13 | 100 | 0.2247 | 25.6271 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
| 446b6b871b331df195fc344a53b29781 |
aychang/bert-base-cased-trec-coarse | aychang | bert | 9 | 253 | transformers | 0 | text-classification | true | false | true | mit | ['en'] | ['trec'] | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['text-classification'] | true | true | true | 2,143 | false |
# bert-base-cased trained on TREC 6-class task
## Model description
A simple base BERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
The hardware and hyperparameters used for training are listed below.
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
save_steps=3000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.974,
'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708,
0.98159509]),
'eval_loss': 0.138086199760437,
'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667,
0.97560976]),
'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. ,
0.98765432]),
'eval_runtime': 1.6132,
'eval_samples_per_second': 309.943}
```
| 0525bf127c1c73e6851085df2f8a220e |
jonatasgrosman/exp_w2v2t_th_unispeech_s624 | jonatasgrosman | unispeech | 10 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 472 | false | # exp_w2v2t_th_unispeech_s624
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
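A minimal transcription sketch with the HuggingSound library is shown below; the audio path is a placeholder and must point to 16 kHz speech.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_unispeech_s624")

# audio must be sampled at 16 kHz; the path below is a placeholder
transcriptions = model.transcribe(["sample_th.wav"])
print(transcriptions[0]["transcription"])
```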
| 050d069cb255e16fdde892a0acc6d4bc |
SebastianS/codeparrot-ds | SebastianS | gpt2 | 17 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,250 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4905
## Model description
More information needed
## Intended uses & limitations
More information needed
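Pending fuller documentation, a minimal, illustrative sketch is shown below. The name suggests the CodeParrot recipe (GPT-2 trained for Python code completion), though the card does not document the training data, so treat the prompt as an illustration only.

```python
from transformers import pipeline

# minimal sketch: autocomplete a Python snippet with the fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model="SebastianS/codeparrot-ds")

prompt = "# create a pandas dataframe from a dict\nimport pandas as pd\n"
print(generator(prompt, max_new_tokens=40, num_return_sequences=1)[0]["generated_text"])
```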
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7149 | 0.85 | 1000 | 2.4905 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| e47c54026246e29576f3b779341fcd5b |
naksu/distilbert-base-uncased-finetuned-cola | naksu | distilbert | 44 | 12 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7831
- Matthews Correlation: 0.5374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5259 | 1.0 | 535 | 0.5289 | 0.4280 |
| 0.3473 | 2.0 | 1070 | 0.5123 | 0.4783 |
| 0.237 | 3.0 | 1605 | 0.5517 | 0.5235 |
| 0.1736 | 4.0 | 2140 | 0.7831 | 0.5374 |
| 0.1255 | 5.0 | 2675 | 0.8109 | 0.5361 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 97d8c07d7a898a08fadd1bab69aa79a0 |
Stc043/distilbert-base-multilingual-cased-finetuned-dakshina-ml | Stc043 | distilbert | 9 | 2 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,360 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-dakshina-ml
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5614 | 1.0 | 16 | 1.4559 |
| 1.4984 | 2.0 | 32 | 1.3131 |
| 1.3218 | 3.0 | 48 | 1.2369 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 6671ebc2c41724815266e29112bd26d1 |
JovialValley/model_syllable_onSet2 | JovialValley | wav2vec2 | 12 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 11,465 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_syllable_onSet2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- 0 Precision: 1.0
- 0 Recall: 0.96
- 0 F1-score: 0.9796
- 0 Support: 25
- 1 Precision: 0.9643
- 1 Recall: 0.9643
- 1 F1-score: 0.9643
- 1 Support: 28
- 2 Precision: 1.0
- 2 Recall: 0.9643
- 2 F1-score: 0.9818
- 2 Support: 28
- 3 Precision: 0.8889
- 3 Recall: 1.0
- 3 F1-score: 0.9412
- 3 Support: 16
- Accuracy: 0.9691
- Macro avg Precision: 0.9633
- Macro avg Recall: 0.9721
- Macro avg F1-score: 0.9667
- Macro avg Support: 97
- Weighted avg Precision: 0.9714
- Weighted avg Recall: 0.9691
- Weighted avg F1-score: 0.9695
- Weighted avg Support: 97
- Wer: 0.2827
- Mtrix: [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:--------------------------------------------------------------------------------------:|
| 1.3102 | 4.16 | 100 | 1.2133 | 0.125 | 0.04 | 0.0606 | 25 | 0.0 | 0.0 | 0.0 | 28 | 0.3146 | 1.0 | 0.4786 | 28 | 0.0 | 0.0 | 0.0 | 16 | 0.2990 | 0.1099 | 0.26 | 0.1348 | 97 | 0.1230 | 0.2990 | 0.1538 | 97 | 0.9676 | [[0, 1, 2, 3], [0, 1, 0, 24, 0], [1, 7, 0, 21, 0], [2, 0, 0, 28, 0], [3, 0, 0, 16, 0]] |
| 0.7368 | 8.33 | 200 | 0.7100 | 1.0 | 0.72 | 0.8372 | 25 | 0.3333 | 0.0357 | 0.0645 | 28 | 0.3684 | 1.0 | 0.5385 | 28 | 0.0 | 0.0 | 0.0 | 16 | 0.4845 | 0.4254 | 0.4389 | 0.3600 | 97 | 0.4603 | 0.4845 | 0.3898 | 97 | 0.8227 | [[0, 1, 2, 3], [0, 18, 2, 5, 0], [1, 0, 1, 27, 0], [2, 0, 0, 28, 0], [3, 0, 0, 16, 0]] |
| 0.3813 | 12.49 | 300 | 0.3802 | 0.8519 | 0.92 | 0.8846 | 25 | 0.7333 | 0.7857 | 0.7586 | 28 | 0.9231 | 0.8571 | 0.8889 | 28 | 0.9286 | 0.8125 | 0.8667 | 16 | 0.8454 | 0.8592 | 0.8438 | 0.8497 | 97 | 0.8509 | 0.8454 | 0.8465 | 97 | 0.7694 | [[0, 1, 2, 3], [0, 23, 2, 0, 0], [1, 4, 22, 2, 0], [2, 0, 3, 24, 1], [3, 0, 3, 0, 13]] |
| 0.2761 | 16.65 | 400 | 0.2263 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9794 | 0.9722 | 0.9821 | 0.9762 | 97 | 0.9817 | 0.9794 | 0.9798 | 97 | 0.4392 | [[0, 1, 2, 3], [0, 25, 0, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1596 | 20.82 | 500 | 0.2283 | 1.0 | 0.96 | 0.9796 | 25 | 0.9310 | 0.9643 | 0.9474 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9375 | 0.9375 | 0.9375 | 16 | 0.9588 | 0.9582 | 0.9565 | 0.9572 | 97 | 0.9595 | 0.9588 | 0.9589 | 97 | 0.4971 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 1, 0], [2, 0, 0, 27, 1], [3, 0, 1, 0, 15]] |
| 0.124 | 24.98 | 600 | 0.1841 | 1.0 | 0.96 | 0.9796 | 25 | 0.9655 | 1.0 | 0.9825 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9794 | 0.9767 | 0.9811 | 0.9784 | 97 | 0.9803 | 0.9794 | 0.9794 | 97 | 0.2955 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 28, 0, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1162 | 29.16 | 700 | 0.2286 | 1.0 | 0.96 | 0.9796 | 25 | 0.9333 | 1.0 | 0.9655 | 28 | 1.0 | 0.9286 | 0.9630 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9686 | 0.9721 | 0.9694 | 97 | 0.9711 | 0.9691 | 0.9691 | 97 | 0.3627 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 28, 0, 0], [2, 0, 1, 26, 1], [3, 0, 0, 0, 16]] |
| 0.1576 | 33.33 | 800 | 0.2259 | 1.0 | 0.92 | 0.9583 | 25 | 0.9333 | 1.0 | 0.9655 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9686 | 0.9711 | 0.9688 | 97 | 0.9711 | 0.9691 | 0.9691 | 97 | 0.3210 | [[0, 1, 2, 3], [0, 23, 2, 0, 0], [1, 0, 28, 0, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0957 | 37.49 | 900 | 0.2757 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.9412 | 1.0 | 0.9697 | 16 | 0.9691 | 0.9674 | 0.9721 | 0.9695 | 97 | 0.9697 | 0.9691 | 0.9691 | 97 | 0.3499 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 1, 0], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.1145 | 41.65 | 1000 | 0.2951 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8421 | 1.0 | 0.9143 | 16 | 0.9691 | 0.9605 | 0.9721 | 0.9644 | 97 | 0.9740 | 0.9691 | 0.9701 | 97 | 0.3024 | [[0, 1, 2, 3], [0, 24, 0, 0, 1], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.121 | 45.82 | 1100 | 0.3262 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9643 | 0.9818 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8421 | 1.0 | 0.9143 | 16 | 0.9691 | 0.9605 | 0.9721 | 0.9644 | 97 | 0.9740 | 0.9691 | 0.9701 | 97 | 0.2885 | [[0, 1, 2, 3], [0, 24, 0, 0, 1], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.079 | 49.98 | 1200 | 0.3615 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.3615 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0733 | 54.16 | 1300 | 0.3891 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.3082 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0962 | 58.33 | 1400 | 0.3620 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.2851 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0628 | 62.49 | 1500 | 0.4084 | 1.0 | 0.96 | 0.9796 | 25 | 0.9630 | 0.9286 | 0.9455 | 28 | 0.9643 | 0.9643 | 0.9643 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9588 | 0.9540 | 0.9632 | 0.9576 | 97 | 0.9607 | 0.9588 | 0.9590 | 97 | 0.3001 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 26, 1, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
| 0.0675 | 66.65 | 1600 | 0.4231 | 1.0 | 0.96 | 0.9796 | 25 | 0.9643 | 0.9643 | 0.9643 | 28 | 1.0 | 0.9643 | 0.9818 | 28 | 0.8889 | 1.0 | 0.9412 | 16 | 0.9691 | 0.9633 | 0.9721 | 0.9667 | 97 | 0.9714 | 0.9691 | 0.9695 | 97 | 0.2827 | [[0, 1, 2, 3], [0, 24, 1, 0, 0], [1, 0, 27, 0, 1], [2, 0, 0, 27, 1], [3, 0, 0, 0, 16]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| cced43e8be7f5d6cbff33210fe22d723 |
nandysoham/2008_Sichuan_earthquake-clustered | nandysoham | distilbert | 8 | 2 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,879 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/2008_Sichuan_earthquake-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4560
- Train End Logits Accuracy: 0.8854
- Train Start Logits Accuracy: 0.7882
- Validation Loss: 0.3706
- Validation End Logits Accuracy: 0.8947
- Validation Start Logits Accuracy: 0.8947
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
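The card was generated from a Keras training run, so the repository is assumed to hold TensorFlow weights; a hypothetical extractive question-answering sketch is shown below, with an invented question/context pair.

```python
from transformers import pipeline

# minimal sketch: extractive QA with the fine-tuned checkpoint (TensorFlow weights assumed)
qa = pipeline("question-answering",
              model="nandysoham/2008_Sichuan_earthquake-clustered",
              framework="tf")

result = qa(question="When did the Sichuan earthquake occur?",
            context="The 2008 Sichuan earthquake occurred on May 12, 2008 in Sichuan province, China.")
print(result["answer"])
```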
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4560 | 0.8854 | 0.7882 | 0.3706 | 0.8947 | 0.8947 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 26cac989496c9578c30d6caa35196643 |
DrishtiSharma/finetuned-SwinT-Indian-Food-Classification-v3 | DrishtiSharma | swin | 19 | 13 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'generated_from_trainer'] | true | true | true | 3,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-SwinT-Indian-Food-Classification-v3
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the Indian-Food-Images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2910
- Accuracy: 0.9437
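
For a quick smoke test of the fine-tuned checkpoint, a hedged inference sketch with the `transformers` image-classification pipeline is shown below; the image path is a placeholder and the label set is whatever was stored in the model config.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="DrishtiSharma/finetuned-SwinT-Indian-Food-Classification-v3",
)

# Placeholder path or URL to a food photo; the pipeline handles resizing and normalization.
predictions = classifier("path/to/food_photo.jpg")
print(predictions[:3])  # top predicted dishes with scores
```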
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9511 | 0.3 | 100 | 0.6092 | 0.8172 |
| 0.6214 | 0.6 | 200 | 0.4406 | 0.8672 |
| 0.7355 | 0.9 | 300 | 0.3665 | 0.8927 |
| 0.6078 | 1.2 | 400 | 0.3285 | 0.9065 |
| 0.439 | 1.5 | 500 | 0.3855 | 0.8916 |
| 0.3644 | 1.8 | 600 | 0.4082 | 0.8969 |
| 0.4748 | 2.1 | 700 | 0.3496 | 0.9022 |
| 0.3966 | 2.4 | 800 | 0.3626 | 0.8905 |
| 0.5799 | 2.7 | 900 | 0.4833 | 0.8767 |
| 0.2995 | 3.0 | 1000 | 0.3387 | 0.9044 |
| 0.3152 | 3.3 | 1100 | 0.3739 | 0.9097 |
| 0.3284 | 3.6 | 1200 | 0.4217 | 0.8916 |
| 0.3631 | 3.9 | 1300 | 0.4118 | 0.9044 |
| 0.219 | 4.2 | 1400 | 0.3721 | 0.9139 |
| 0.2874 | 4.5 | 1500 | 0.3030 | 0.9288 |
| 0.2819 | 4.8 | 1600 | 0.4056 | 0.9150 |
| 0.1755 | 5.11 | 1700 | 0.4039 | 0.9097 |
| 0.2462 | 5.41 | 1800 | 0.3550 | 0.9118 |
| 0.1737 | 5.71 | 1900 | 0.3444 | 0.9150 |
| 0.174 | 6.01 | 2000 | 0.3667 | 0.9160 |
| 0.1536 | 6.31 | 2100 | 0.3301 | 0.9288 |
| 0.0911 | 6.61 | 2200 | 0.3390 | 0.9299 |
| 0.0907 | 6.91 | 2300 | 0.2923 | 0.9288 |
| 0.0921 | 7.21 | 2400 | 0.3502 | 0.9256 |
| 0.1662 | 7.51 | 2500 | 0.3197 | 0.9341 |
| 0.0607 | 7.81 | 2600 | 0.3092 | 0.9362 |
| 0.111 | 8.11 | 2700 | 0.3146 | 0.9394 |
| 0.0588 | 8.41 | 2800 | 0.3069 | 0.9341 |
| 0.131 | 8.71 | 2900 | 0.2971 | 0.9405 |
| 0.1903 | 9.01 | 3000 | 0.3078 | 0.9384 |
| 0.2116 | 9.31 | 3100 | 0.3112 | 0.9341 |
| 0.1415 | 9.61 | 3200 | 0.2956 | 0.9405 |
| 0.1106 | 9.91 | 3300 | 0.2910 | 0.9437 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| f43bf5e6d3dc2b783f6fe6880e646862 |
anas-awadalla/t5-base-few-shot-k-1024-finetuned-squad-seed-0 | anas-awadalla | t5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 958 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| bc0a9a526993a9697ad9b5efa6c80da0 |
l3cube-pune/mr-random-twt-1m | l3cube-pune | bert | 8 | 2 | transformers | 0 | fill-mask | true | false | false | cc-by-4.0 | ['mr'] | ['L3Cube-MahaCorpus'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 615 | false | A MahaBERT (l3cube-pune/marathi-bert-v2) model finetuned on random 1 million Marathi Tweets.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).
Released under project: https://github.com/l3cube-pune/MarathiNLP
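
As a hedged usage sketch (not part of the original card), the model can be queried through the fill-mask pipeline; the Marathi sentence below is only an illustration and `[MASK]` is the BERT-style mask token.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/mr-random-twt-1m")

# Illustrative Marathi sentence: "I will go to [MASK] today."
for prediction in fill_mask("मी आज [MASK] जाणार आहे."):
    print(prediction["token_str"], round(prediction["score"], 3))
```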
```
@article{gokhale2022spread,
title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
journal={arXiv preprint arXiv:2210.04267},
year={2022}
}
```
| 96a3629e356ccf4fc593ef8eacfb3415 |
henryscheible/eval_masked_102_wnli | henryscheible | null | 13 | 0 | null | 0 | null | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,011 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_102_wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7126
- Accuracy: 0.3239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| ccb22cce7fa44d41cbd0b3b4422379e0 |
sd-concepts-library/ic0n | sd-concepts-library | null | 12 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,258 | false | ### ic0n on Stable Diffusion
This is the `<ic0n>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<ic0n> 0](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/5.jpeg)
![<ic0n> 1](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/6.jpeg)
![<ic0n> 2](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/3.jpeg)
![<ic0n> 3](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/0.jpeg)
![<ic0n> 4](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/2.jpeg)
![<ic0n> 5](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/1.jpeg)
![<ic0n> 6](https://huggingface.co/sd-concepts-library/ic0n/resolve/main/concept_images/4.jpeg)
| b8320e809fbc601df2fde3a0c4247ab7 |
Duskfallcrew/sailor-moon-mix | Duskfallcrew | null | 21 | 23 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,138 | false | ### Sailor Moon Mix Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Concepts & More details: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
Do not sell merges or this model
Do not let it bite you after midnight
Rini might have rabies in this; we haven't tested her.
OVERALL: HAVE FUN
anisenti88 (use that in your prompt)
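
A minimal `diffusers` sketch using that concept token is shown below; the prompt wording, inference settings, and float16/CUDA setup are assumptions, not part of the original card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/sailor-moon-mix", torch_dtype=torch.float16
).to("cuda")

# "anisenti88" is the concept token mentioned above; the rest of the prompt is illustrative.
prompt = "anisenti88, magical girl portrait, pastel colors, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sailor_moon_mix_sample.png")
```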
| 18906016c311f9a3ca8cff2591a3a153 |
huggan/fastgan-few-shot-aurora | huggan | null | 8 | 0 | null | 0 | unconditional-image-generation | true | false | false | mit | null | ['huggan/few-shot-aurora'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['huggan', 'gan', 'unconditional-image-generation'] | false | true | true | 1,864 | false |
# Generate aurora image using FastGAN
## Model description
[FastGAN model](https://arxiv.org/abs/2101.04775) is a Generative Adversarial Networks (GAN) training on a small amount of high-fidelity images with minimum computing cost. Using a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder, the model was able to converge after some hours of training for either 100 high-quality images or 1000 images datasets.
This model was trained on a dataset of 272 high-quality images of aurora.
#### How to use
```python
# Shell step (not Python): clone this model repository to get the weights
#   git clone https://huggingface.co/huggan/fastgan-few-shot-aurora/

import torch
from torchvision.utils import save_image

# NOTE: `Generator` is the FastGAN generator class from the implementation used for
# training (see the HugGAN project code); the import path below is a placeholder.
# from fastgan.models import Generator

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_generator(model_name_or_path):
    generator = Generator(in_channels=256, out_channels=3)
    generator = generator.from_pretrained(model_name_or_path, in_channels=256, out_channels=3)
    _ = generator.eval()
    return generator

def _denormalize(input: torch.Tensor) -> torch.Tensor:
    return (input * 127.5) + 127.5

# Load the generator and move it to the same device as the noise tensor
generator = load_generator("huggan/fastgan-few-shot-aurora").to(device)

# Generate an image from random noise
noise = torch.zeros(1, 256, 1, 1, device=device).normal_(0.0, 1.0)
with torch.no_grad():
    gan_images, _ = generator(noise)

gan_images = _denormalize(gan_images.detach())
save_image(gan_images, "sample.png", nrow=1, normalize=True)
```
#### Limitations and bias
* Converge faster and better with small datasets (less than 1000 samples)
## Training data
[few-shot-aurora](https://huggingface.co/datasets/huggan/few-shot-aurora)
## Generated Images
![Example image](example.png)
### BibTeX entry and citation info
```bibtex
@article{FastGAN,
title={Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis},
author={Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal},
journal={ICLR},
year={2021}
}
``` | c9b1b889a99f9b4d954ec46d3550c490 |
Fredium/wav2vec2-large-xlsr-53-demo-colab-telugu_new | Fredium | wav2vec2 | 11 | 15 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['openslr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,044 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab-telugu_new
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the openslr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.7.1
- Tokenizers 0.13.2
| c7a7446c9ae050d9f88c457fc3bf05a2 |
gustavecortal/distilcamembert-cae-no-feeling | gustavecortal | camembert | 6 | 3 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,676 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilcamembert-cae-no-feeling
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7566
- Precision: 0.8160
- Recall: 0.7975
- F1: 0.7998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.174 | 1.0 | 40 | 0.9568 | 0.4398 | 0.4937 | 0.3881 |
| 0.8722 | 2.0 | 80 | 0.7551 | 0.7716 | 0.7215 | 0.7120 |
| 0.5255 | 3.0 | 120 | 0.7144 | 0.7533 | 0.6962 | 0.6937 |
| 0.3102 | 4.0 | 160 | 0.6734 | 0.7711 | 0.7595 | 0.7627 |
| 0.1804 | 5.0 | 200 | 0.7566 | 0.8160 | 0.7975 | 0.7998 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| 795eb52107e8670850c5f7db75bb3e25 |
timm/convnext_base.fb_in22k_ft_in1k | timm | null | 4 | 2,614 | timm | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet-1k', 'imagenet-22k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification', 'timm'] | false | true | true | 21,429 | false | # Model card for convnext_base.fb_in22k_ft_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.6
- GMACs: 15.4
- Activations (M): 28.8
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_base.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_base.fb_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_base.fb_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
| f6344b6e524d734b45ee8c8d8c444056 |
Helsinki-NLP/opus-mt-ja-pl | Helsinki-NLP | marian | 11 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ja', 'pl'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,113 | false |
### jpn-pol
* source group: Japanese
* target group: Polish
* OPUS readme: [jpn-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.pol | 15.7 | 0.386 |
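
A minimal translation sketch with the standard MarianMT classes (not part of the original card; the Japanese input sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-pl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative input: "Good morning." in Japanese.
batch = tokenizer(["おはようございます。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```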
### System Info:
- hf_name: jpn-pol
- source_languages: jpn
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'pl']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: pol
- short_pair: ja-pl
- chrF2_score: 0.386
- bleu: 15.7
- brevity_penalty: 1.0
- ref_len: 69904.0
- src_name: Japanese
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: pl
- prefer_old: False
- long_pair: jpn-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 8a950739f091d0029890abec6a909e47 |
KIZervus/KIZervus | KIZervus | distilbert | 9 | 1 | transformers | 1 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,678 | false |
# KIZervus
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased).
It is trained to classify German text into the classes "vulgar" speech and "non-vulgar" speech.
The dataset is a collection of other labeled German-language sources. For an overview, see the GitHub repository here: https://github.com/NKDataConv/KIZervus
Both the data and the training procedure are documented in the GitHub repo. You are welcome to contribute.
It achieves the following results on the evaluation set:
- Train Loss: 0.4640
- Train Accuracy: 0.7744
- Validation Loss: 0.4852
- Validation Accuracy: 0.7937
- Epoch: 1
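
A hedged TensorFlow inference sketch is given below; the German example sentence is invented, and the mapping of label ids to "vulgar"/"non-vulgar" should be read from `model.config.id2label` rather than assumed.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "KIZervus/KIZervus"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Illustrative German input sentence.
inputs = tokenizer("Das ist ein ganz normaler Satz.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1).numpy()[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```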
## Training procedure
For details, see the repo and documentation here: https://github.com/NKDataConv/KIZervus
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 822, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4830 | 0.7617 | 0.5061 | 0.7406 | 0 |
| 0.4640 | 0.7744 | 0.4852 | 0.7937 | 1 |
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
### Supporter
![BMBF Logo](./BMBF_Logo.png)
| c22c663a202467993bb9e791d8f2b813 |
jhonparra18/roberta-base-cv-studio_name-medium | jhonparra18 | roberta | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 813 | false | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-cv-studio_name-medium
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
Predicts a studio name from the text of a CV.
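
A hedged inference sketch follows; the CV excerpt is invented for illustration, and real inputs would be the full CV text.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jhonparra18/roberta-base-cv-studio_name-medium",
)

# Illustrative CV excerpt.
cv_text = "Senior gameplay programmer with six years of experience in Unity and C#."
print(classifier(cv_text))
```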
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
### Framework versions
- Transformers 4.19.0
- Pytorch 1.8.2+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
| 115158982961a28fc4dee3def83083da |
jonatasgrosman/exp_w2v2t_es_wavlm_s655 | jonatasgrosman | wavlm | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 439 | false | # exp_w2v2t_es_wavlm_s655
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
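
A minimal transcription sketch with HuggingSound is shown below; the audio paths are placeholders for 16 kHz Spanish recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_wavlm_s655")

# Placeholder paths to 16 kHz Spanish audio files.
audio_paths = ["sample1.wav", "sample2.mp3"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```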
| 06393096340b6acf690661d110b3fc63 |
shreyasgite/wav2vec2-large-xls-r-300m-dm32 | shreyasgite | wav2vec2 | 11 | 2 | transformers | 0 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,827 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dm32
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5688
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 22
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.41 | 34 | 0.6769 | 0.6458 |
| No log | 4.83 | 68 | 0.6864 | 0.5208 |
| No log | 7.28 | 102 | 0.6596 | 0.6042 |
| 0.7106 | 9.69 | 136 | 0.6208 | 0.6875 |
| 0.7106 | 12.14 | 170 | 0.6152 | 0.6875 |
| 0.7106 | 14.55 | 204 | 0.6167 | 0.6875 |
| 0.6464 | 16.97 | 238 | 0.5782 | 0.7708 |
| 0.6464 | 19.41 | 272 | 0.6011 | 0.7292 |
| 0.6464 | 21.83 | 306 | 0.5688 | 0.7917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| e2bf159c56eb16da6672900bfd2314ac |
baru98/bert-base-cased-finetuned-squad | baru98 | bert | 14 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,255 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 5.7012 |
| No log | 2.0 | 14 | 5.5021 |
| No log | 3.0 | 21 | 5.4212 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| f7a3f3a2a92e0b53b7322777a09425e0 |
muhtasham/bert-tiny-target-cola | muhtasham | bert | 15 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,874 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-target-cola
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6626
- Matthews Correlation: 0.1458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6124 | 1.87 | 500 | 0.6192 | 0.0 |
| 0.6016 | 3.73 | 1000 | 0.6167 | 0.0 |
| 0.5838 | 5.6 | 1500 | 0.6166 | 0.0149 |
| 0.5555 | 7.46 | 2000 | 0.6344 | 0.0465 |
| 0.5272 | 9.33 | 2500 | 0.6542 | 0.1399 |
| 0.5058 | 11.19 | 3000 | 0.6626 | 0.1458 |
| 0.4791 | 13.06 | 3500 | 0.6868 | 0.1192 |
| 0.4577 | 14.93 | 4000 | 0.7215 | 0.1230 |
| 0.4425 | 16.79 | 4500 | 0.7322 | 0.1243 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 86e029e8a44a9da9225f017e27fcebc6 |
jonatasgrosman/exp_w2v2t_zh-cn_vp-100k_s131 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh-CN'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'zh-CN'] | false | true | true | 481 | false | # exp_w2v2t_zh-cn_vp-100k_s131
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 67c7bf48719d12603fb7771061f40bb9 |
deepdml/whisper-small-eu | deepdml | whisper | 18 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['eu'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,557 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small Basque-Euskera
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4485
- Wer: 19.7663
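
A minimal, hedged inference sketch with the `transformers` ASR pipeline; the audio path is a placeholder for a 16 kHz Basque clip.

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="deepdml/whisper-small-eu",
)

# Placeholder path to a Basque speech sample.
print(transcriber("basque_sample.wav")["text"])
```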
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.048 | 4.04 | 1000 | 0.3402 | 21.7816 |
| 0.0047 | 9.03 | 2000 | 0.3862 | 20.1694 |
| 0.0012 | 14.02 | 3000 | 0.4221 | 19.7419 |
| 0.0008 | 19.02 | 4000 | 0.4411 | 19.7174 |
| 0.0006 | 24.01 | 5000 | 0.4485 | 19.7663 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 9bebd6956bafa8d2c4407ee7a06a9e90 |
luigisaetta/whisper-atco2-large | luigisaetta | whisper | 22 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,588 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6750
- Wer: 16.9811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1978 | 2.01 | 100 | 0.5474 | 21.0692 |
| 0.0087 | 4.02 | 200 | 0.6202 | 19.4969 |
| 0.0029 | 6.04 | 300 | 0.6264 | 18.2390 |
| 0.0003 | 8.05 | 400 | 0.6659 | 17.1908 |
| 0.0002 | 10.06 | 500 | 0.6750 | 16.9811 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 36bf922003807cf16af959fc82304fb6 |
sd-concepts-library/franz-unterberger | sd-concepts-library | null | 28 | 0 | null | 6 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,442 | false | ### Franz Unterberger on Stable Diffusion
This is the `<franz-unterberger>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<franz-unterberger> 0](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/20.jpeg)
![<franz-unterberger> 1](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/6.jpeg)
![<franz-unterberger> 2](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/1.jpeg)
![<franz-unterberger> 3](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/18.jpeg)
![<franz-unterberger> 4](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/14.jpeg)
![<franz-unterberger> 5](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/8.jpeg)
![<franz-unterberger> 6](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/4.jpeg)
![<franz-unterberger> 7](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/19.jpeg)
![<franz-unterberger> 8](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/13.jpeg)
![<franz-unterberger> 9](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/22.jpeg)
![<franz-unterberger> 10](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/7.jpeg)
![<franz-unterberger> 11](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/17.jpeg)
![<franz-unterberger> 12](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/3.jpeg)
![<franz-unterberger> 13](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/11.jpeg)
![<franz-unterberger> 14](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/9.jpeg)
![<franz-unterberger> 15](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/0.jpeg)
![<franz-unterberger> 16](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/10.jpeg)
![<franz-unterberger> 17](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/15.jpeg)
![<franz-unterberger> 18](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/5.jpeg)
![<franz-unterberger> 19](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/21.jpeg)
![<franz-unterberger> 20](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/2.jpeg)
![<franz-unterberger> 21](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/16.jpeg)
![<franz-unterberger> 22](https://huggingface.co/sd-concepts-library/franz-unterberger/resolve/main/concept_images/12.jpeg)
| 314d7554ec262903a9da9d735aa4cbcb |
clara-dumont/wav2vec2-base-timit-eng | clara-dumont | wav2vec2 | 12 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,984 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-eng
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
- Wer: 0.3418
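
A hedged CTC decoding sketch is shown below, assuming the Wav2Vec2 processor and vocabulary were pushed together with the checkpoint; the audio path is a placeholder for 16 kHz English speech.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "clara-dumont/wav2vec2-base-timit-eng"
processor = Wav2Vec2Processor.from_pretrained(model_name)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Placeholder path; the model expects 16 kHz mono English audio.
speech, _ = librosa.load("english_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```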
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5159 | 1.0 | 500 | 1.7153 | 1.0291 |
| 0.8502 | 2.01 | 1000 | 0.5204 | 0.5146 |
| 0.431 | 3.01 | 1500 | 0.4491 | 0.4537 |
| 0.3073 | 4.02 | 2000 | 0.3883 | 0.4190 |
| 0.2338 | 5.02 | 2500 | 0.4453 | 0.4230 |
| 0.1956 | 6.02 | 3000 | 0.4599 | 0.3981 |
| 0.1594 | 7.03 | 3500 | 0.4240 | 0.3916 |
| 0.1423 | 8.03 | 4000 | 0.4756 | 0.3975 |
| 0.1252 | 9.04 | 4500 | 0.4427 | 0.3827 |
| 0.1064 | 10.04 | 5000 | 0.4489 | 0.3809 |
| 0.101 | 11.04 | 5500 | 0.4531 | 0.3961 |
| 0.0877 | 12.05 | 6000 | 0.4881 | 0.3883 |
| 0.0817 | 13.05 | 6500 | 0.5023 | 0.3774 |
| 0.0703 | 14.06 | 7000 | 0.5078 | 0.3679 |
| 0.0663 | 15.06 | 7500 | 0.5279 | 0.3620 |
| 0.0584 | 16.06 | 8000 | 0.5112 | 0.3653 |
| 0.0579 | 17.07 | 8500 | 0.4959 | 0.3633 |
| 0.0572 | 18.07 | 9000 | 0.4676 | 0.3626 |
| 0.0502 | 19.08 | 9500 | 0.5216 | 0.3503 |
| 0.0432 | 20.08 | 10000 | 0.4946 | 0.3480 |
| 0.0417 | 21.08 | 10500 | 0.4949 | 0.3532 |
| 0.0335 | 22.09 | 11000 | 0.5485 | 0.3557 |
| 0.032 | 23.09 | 11500 | 0.5087 | 0.3464 |
| 0.0334 | 24.1 | 12000 | 0.5313 | 0.3498 |
| 0.0263 | 25.1 | 12500 | 0.5148 | 0.3457 |
| 0.0242 | 26.1 | 13000 | 0.5232 | 0.3442 |
| 0.0235 | 27.11 | 13500 | 0.5122 | 0.3418 |
| 0.0221 | 28.11 | 14000 | 0.5074 | 0.3407 |
| 0.0215 | 29.12 | 14500 | 0.5195 | 0.3418 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
| a15705f41401ff5c4c89d627e3608691 |
Helsinki-NLP/opus-mt-sv-tll | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-sv-tll
* source languages: sv
* target languages: tll
* OPUS readme: [sv-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tll | 24.9 | 0.484 |
| b38b306c0d56850f832273098da111af |
Someman/gpt2-nepali | Someman | gpt2 | 14 | 2 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,247 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-nepali
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5058
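The card does not yet include a usage example; a minimal text-generation sketch is given below. The prompt is an arbitrary Nepali word and the sampling settings are assumptions:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Someman/gpt2-nepali")

# "नेपाल" ("Nepal") is just an arbitrary prompt; tune the sampling settings as needed.
outputs = generator("नेपाल", max_new_tokens=50, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```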
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9488 | 0.68 | 5000 | 1.5058 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 286bac3148078316d0a15b2e47e74182 |
svalabs/infoxlm-german-question-answering | svalabs | xlm-roberta | 8 | 1,080 | transformers | 1 | question-answering | true | false | false | cc-by-4.0 | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering', 'pytorch', 'qa', 'de'] | false | true | true | 859 | false |
# SVALabs - InfoXLM German QA
In this repository we present our German question answering model.
The trained model is based on [InfoXLM-large](https://huggingface.co/microsoft/infoxlm-large) and was fine-tuned on the [SQuAD](https://huggingface.co/datasets/squad) dataset and the [GermanQuAD](https://huggingface.co/datasets/deepset/germanquad) dataset.
### Model Details
| | Description or Link |
|---|---|
|**Base model** | [```InfoXLM-large```](https://huggingface.co/microsoft/infoxlm-large) |
|**Finetuning task**| Question Answering |
|**Source datasets**| [```SQuAD```](https://huggingface.co/datasets/squad); [```GermanQuAD```](https://huggingface.co/datasets/deepset/germanquad)|
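A minimal usage sketch with the transformers question-answering pipeline; the German question/context pair is an arbitrary example:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="svalabs/infoxlm-german-question-answering")

result = qa(
    question="Was ist die Hauptstadt von Deutschland?",
    context="Berlin ist die Hauptstadt der Bundesrepublik Deutschland.",
)
print(result["answer"], result["score"])
```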
### Performance
The model was tested on 1692 samples of the GermanQuAD test dataset (the other samples were used for validation).
F1-Score: 89.05
EM: 74.76 | 11fcbe7badd2f3729d454f5ec27a903b |
gngpostalsrvc/BERiT_14500 | gngpostalsrvc | roberta | 11 | 7 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,840 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_14500
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0316
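No usage example is provided in this card; the sketch below assumes the checkpoint ships with its tokenizer and exposes a standard mask token. The input sentence is only a placeholder and should be replaced with text in the language the model was trained on:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="gngpostalsrvc/BERiT_14500")

# Placeholder sentence; keep the tokenizer's own mask token where a prediction is wanted.
masked_sentence = f"Example sentence with a {fill.tokenizer.mask_token} to fill."
for prediction in fill(masked_sentence, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```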
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.3825 | 0.19 | 500 | 8.3006 |
| 8.2426 | 0.39 | 1000 | 8.2751 |
| 8.1622 | 0.58 | 1500 | 8.2504 |
| 8.1673 | 0.77 | 2000 | 8.1935 |
| 8.1597 | 0.97 | 2500 | 8.1928 |
| 8.0644 | 1.16 | 3000 | 8.1111 |
| 8.0724 | 1.36 | 3500 | 8.0820 |
| 8.0654 | 1.55 | 4000 | 8.0655 |
| 8.0649 | 1.74 | 4500 | 8.0896 |
| 8.051 | 1.94 | 5000 | 8.0838 |
| 8.0003 | 2.13 | 5500 | 8.0989 |
| 7.9795 | 2.32 | 6000 | 8.0729 |
| 7.9984 | 2.52 | 6500 | 8.0566 |
| 7.9935 | 2.71 | 7000 | 8.0757 |
| 7.9652 | 2.9 | 7500 | 8.0316 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| d69dc825b6a503177dd0f9e07b702f74 |
Helsinki-NLP/opus-mt-nyk-en | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-nyk-en
* source languages: nyk
* target languages: en
* OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nyk.en | 27.3 | 0.423 |
| 9aeb091bba4a6a45b0cb72b51897315d |
theojolliffe/bart-model2-1409 | theojolliffe | bart | 12 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,269 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-model2-1409
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on an unnamed dataset.
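The intended task is not stated here; the ROUGE and generation-length metrics reported below suggest a summarisation/paraphrase-style seq2seq task, so a hedged generation sketch would look like this (the input text and generation settings are assumptions):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "theojolliffe/bart-model2-1409"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Arbitrary input text; replace with data matching the (undocumented) training distribution.
text = "The quarterly report was delivered late because the data pipeline failed twice."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```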
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 434 | 0.0866 | 65.9374 | 57.1515 | 64.1273 | 64.1253 | 19.787 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
| ead333e6fdeb3a2f9f78cacc9e4a137f |
learningdude/wav2vec2-base-finetuned-ks | learningdude | wav2vec2 | 10 | 3 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | null | ['superb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,560 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0834
- Accuracy: 0.9840
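A minimal keyword-spotting sketch using the audio-classification pipeline; it assumes a short 16 kHz mono clip comparable to the spoken-command recordings behind the superb keyword-spotting task:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="learningdude/wav2vec2-base-finetuned-ks")

# "sample.wav" is a placeholder; a short 16 kHz mono clip of a spoken command is assumed.
for prediction in classifier("sample.wav", top_k=3):
    print(prediction["label"], round(prediction["score"], 4))
```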
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6111 | 1.0 | 399 | 0.5123 | 0.9388 |
| 0.2901 | 2.0 | 798 | 0.1725 | 0.9782 |
| 0.1916 | 3.0 | 1197 | 0.1060 | 0.9834 |
| 0.1754 | 4.0 | 1596 | 0.0891 | 0.9829 |
| 0.1384 | 5.0 | 1995 | 0.0834 | 0.9840 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 1.14.0
- Tokenizers 0.12.1
| f8bad01ae8cbe9f660fdc6c3799b988f |
felipeace96/cleaner-restaurant-names | felipeace96 | t5 | 9 | 29 | transformers | 0 | text2text-generation | true | false | false | unknown | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 758 | false |
# Cleaner Names Model
This model was trained using information from the Google Places API (10K+).
The restaurants are from countries such as Mexico and Colombia, so Spanish is the main language.
The idea is to automatically remove the stopwords from the name.
The model was trained with a specific prompt, so you will have to format the name as follows:
"""
REGULAR NAME: {name}.
CLEANED NAME:
"""
For better results use the name in upper case.
Examples:
- POB MCDONALDS POBLADO HAMBURGUESA
- BURGER KING SHOPPING RECIFE 15883
- CENTRO COMERCIAL GRAN ESTACION, DOMINOS PIZZA
Using the prompt:
REGULAR NAME: CENTRO COMERCIAL GRAN ESTACION, DOMINOS PIZZA.
CLEANED NAME:
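A minimal sketch of how that prompt could be assembled and run with a text2text-generation pipeline; the exact newline layout of the prompt and the generation settings are assumptions, so adjust them if the outputs look off:

```python
from transformers import pipeline

cleaner = pipeline("text2text-generation", model="felipeace96/cleaner-restaurant-names")

def clean_name(name: str) -> str:
    # Build the prompt in the trained format, with the name upper-cased for better results.
    prompt = f"REGULAR NAME: {name.upper()}.\nCLEANED NAME: "
    return cleaner(prompt, max_new_tokens=32)[0]["generated_text"]

print(clean_name("Centro Comercial Gran Estacion, Dominos Pizza"))
```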
On the evaluation dataset, the BLEU score is higher than 0.7. It's good, but it can be better.
| 0a20b728443a5400aec513856697b0d7 |
Helsinki-NLP/opus-mt-es-ln | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-es-ln
* source languages: es
* target languages: ln
* OPUS readme: [es-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ln | 27.1 | 0.508 |
| 7a50a1474a0584a5154a8b10bc27c3e8 |
tuwonga/supermarionation | tuwonga | null | 25 | 52 | diffusers | 31 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 3 | 0 | 3 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,932 | false | ### supermarionation
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from Gerry Anderson's **_Supermarionation_** stop-motion productions, mainly the **_Thunderbirds_** TV series. Use the token **_supermarionation_** in your prompts to use the style.
_Download the ckpt file from the "files and versions" tab into the Stable Diffusion models folder of your web-ui of choice._
_I've found the img2img output interesting (and really funny ^^). You can see the results in the second and third pictures (original/img2img). You can play around with the denoising strength (40-70) and turn the restore-faces option on or off._
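If the repository also hosts the weights in diffusers format (the ckpt mentioned above is for web-UIs), a minimal txt2img sketch would look like this; the steps and CFG scale mirror the settings listed further down, and a CUDA GPU is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tuwonga/supermarionation", torch_dtype=torch.float16
).to("cuda")

# The trained token "supermarionation" triggers the style.
prompt = "portrait of a man in supermarionation style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("supermarionation_sample.png")
```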
### supermarionation v2
In this version I've trained characters and vehicles: 47 images, 9,400 steps, 20% text encoder.
-- **Characters and vehicles rendered with this model:**
![Character Samples](https://huggingface.co/tuwonga/supermarionation/resolve/main/supermarionation_v2_prev1.jpg)
_prompt and settings used: **[person/vehicle] in supermarionation style** | **Steps: 30, Sampler: Euler, CFG scale: 7.5**_
**Characters rendered with img2img:**
![Character Samples](https://huggingface.co/tuwonga/supermarionation/resolve/main/supermarionation_v2_prev2.jpg)
_prompt and settings used: **[person] in supermarionation style** | **Steps: 30 - you can play around with settings**_
**Characters rendered with supermarionation in txt2img:**
![Character Samples](https://huggingface.co/tuwonga/supermarionation/resolve/main/supermarionation_prev1.jpg)
_prompt and settings used: **[person] in supermarionation style** | **Steps: 40 - you can play around with settings**_
**Characters rendered with supermarionation in img2img:**
![Character Samples](https://huggingface.co/tuwonga/supermarionation/resolve/main/supermarionation_prev2.jpg)
_prompt and settings used: **[person] in supermarionation style** | **Steps: 40 - you can play around with settings**_
--
Supermarionation v1 was trained with TheLastBen's DreamBooth, using 43 images at 8,600 steps with 18% text encoder.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | b2e048b8e841980edbfcae66003be3be |
shed-e/testgen | shed-e | bart | 15 | 6 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 919 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testgen
This model is a fine-tuned version of [GanjinZero/biobart-v2-base](https://huggingface.co/GanjinZero/biobart-v2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.2
| 8a7879547acfcd771dcf5adb5a282028 |
Helsinki-NLP/opus-mt-tc-big-gmq-he | Helsinki-NLP | marian | 13 | 3 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['da', 'he', 'sv'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 7,392 | false | # opus-mt-tc-big-gmq-he
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from North Germanic languages (gmq) to Hebrew (he).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2022-07-28
- **License:** CC-BY-4.0
- **Language(s):**
- Source Language(s): dan nor swe
- Target Language(s): heb
- Language Pair(s): dan-heb swe-heb
- Valid Target Language Labels:
- **Original Model**: [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.zip)
- **Resources for more information:**
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- More information about released models for this language pair: [OPUS-MT gmq-heb README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-heb/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    "Alle L.L. Zamenhofs tre børn blev myrdet i holocausten.",
    "Tom visade sig vara spion."
]
model_name = "pytorch-models/opus-mt-tc-big-gmq-he"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# ืื ืฉืืืฉืช ืืืืืื ืฉื ืื-ืื ืืืื ืืืฃ ื ืจืฆืื ืืฉืืื.
# ืืกืชืืจ ืฉืืื ืืื ืืจืื.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-gmq-he")
print(pipe("Alle L.L. Zamenhofs tre børn blev myrdet i holocausten."))
# expected output: ืื ืฉืืืฉืช ืืืืืื ืฉื ืื-ืื ืืืื ืืืฃ ื ืจืฆืื ืืฉืืื.
```
## Training
- **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* test set translations: [opusTCv20210807_transformer-big_2022-07-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-07-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-heb/opusTCv20210807_transformer-big_2022-07-28.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| dan-heb | flores101-devtest | 0.52815 | 22.9 | 1012 | 20749 |
| isl-heb | flores101-devtest | 0.42284 | 14.2 | 1012 | 20749 |
| nob-heb | flores101-devtest | 0.49492 | 19.2 | 1012 | 20749 |
| swe-heb | flores101-devtest | 0.52408 | 23.0 | 1012 | 20749 |
## Citation Information
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 8b9f0b0
* port time: Sat Aug 13 00:03:50 EEST 2022
* port machine: LM0-400-22516.local
| f62911982094fdfc5078c0d617e7fea8 |