repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4eJIoBek/ruGPT3_small_nujdiki_stage1 | 4eJIoBek | gpt2 | 11 | 8 | transformers | 1 | text-generation | true | false | false | openrail | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 729 | false | This is a fine-tune of SberAI ruGPT3 small (125M parameters) on all the original "nuzhdiki" recordings (2 hours of audio, transcribed with OpenAI Whisper medium). Fine-tuning used a block size of 1024 and 25 epochs. All inference scripts for the model are at https://github.com/ai-forever/ru-gpts; via transformers it runs fine on 4 GB of VRAM, and will probably work on 2 GB as well.
How do I run it with transformers?
Run the lines below in Jupyter:
```python
from transformers import pipeline, set_seed
set_seed(32)
generator = pipeline('text-generation', model="4eJIoBek/ruGPT3_small_nujdiki_fithah", do_sample=True, max_length=350)
generator("Александр Сергеевич Пушкин известен также благодаря своим сказкам, которые включают в себя: ")
```
Everything works, and works remarkably well. | 868d57f4bea5f1dca80e21be31c90e24 |
excode/distilbert-amazon-shoe-reviews_ubuntu | excode | distilbert | 10 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,957 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews_ubuntu
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9573
- Accuracy: 0.5726
- F1: [0.62998761 0.45096564 0.49037037 0.55640244 0.73547094]
- Precision: [0.62334478 0.45704118 0.47534706 0.5858748 0.72102161]
- Recall: [0.63677355 0.4450495 0.5063743 0.52975327 0.75051125]
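Each per-class F1 above is the harmonic mean of the corresponding per-class precision and recall, so the three arrays are mutually consistent. A quick check against the class-0 values:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Class-0 precision and recall from the arrays above
f1 = f1_score(0.62334478, 0.63677355)
print(round(f1, 6))  # close to the reported 0.62998761
```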
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.9617 | 1.0 | 2813 | 0.9573 | 0.5726 | [0.62998761 0.45096564 0.49037037 0.55640244 0.73547094] | [0.62334478 0.45704118 0.47534706 0.5858748 0.72102161] | [0.63677355 0.4450495 0.5063743 0.52975327 0.75051125] |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| 7ec0ea90c1b4aa6f5c8a45e9d5b00a82 |
Ilyes/wav2vec2-large-xlsr-53-french | Ilyes | wav2vec2 | 11 | 1,033 | transformers | 2 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['common_voice'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | true | true | true | 1,876 | false | ## Evaluation on Common Voice FR Test
The script used for training and evaluation can be found here: https://github.com/irebai/wav2vec2
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import re
model_name = "Ilyes/wav2vec2-large-xlsr-53-french"
device = "cpu" # "cuda"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "fr", split="test", cache_dir="./data/fr")
chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“\%\‘\”\�\‘\’\’\’\‘\…\·\!\ǃ\?\«\‹\»\›“\”\\ʿ\ʾ\„\∞\\|\.\,\;\:\*\—\–\─\―\_\/\:\ː\;\,\=\«\»\→]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch

ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
## Results
WER=12.82%
CER=4.40%
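For reference, the WER reported above is the word-level edit distance between hypothesis and reference divided by the number of reference words (character-level for CER). A minimal implementation, independent of the `load_metric("wer")` call in the script:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits divided by reference word count."""
    return edit_distance(reference.split(), hypothesis.split()) / len(reference.split())

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edits divided by reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

print(wer("le chat est noir", "le chat est gris"))  # 0.25 (one substitution out of four words)
```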
| a021ceea6006b0b46e50555ec223a87f |
Priya9/wav2vec2-large-xls-r-300m-turkish-colab | Priya9 | wav2vec2 | 13 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,431 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3859
- Wer: 0.4680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
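The `total_train_batch_size: 32` above is `train_batch_size: 16` times `gradient_accumulation_steps: 2`: averaging the mean gradients of two equal-size micro-batches gives the same update as one batch of 32. A schematic check in plain Python (numbers stand in for per-sample gradients; this is not the Trainer's actual code):

```python
def mean(xs):
    return sum(xs) / len(xs)

# Pretend each number is a per-sample gradient.
micro_batches = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]

# Gradient accumulation: average the per-micro-batch mean gradients.
accumulated = mean([mean(mb) for mb in micro_batches])

# One big batch over all samples at once.
full_batch = mean([g for mb in micro_batches for g in mb])

assert accumulated == full_batch  # identical update when micro-batches are equal-size
print(accumulated)  # 4.5
```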
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8707 | 3.67 | 400 | 0.6588 | 0.7110 |
| 0.3955 | 7.34 | 800 | 0.3859 | 0.4680 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 865cac1fa1dbd9ac57afea315283c57d |
mariolinml/roberta_large-chunk-conll2003_0818_v0 | mariolinml | roberta | 14 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,443 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_large-chunk-conll2003_0818_v0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1566
- Precision: 0.9016
- Recall: 0.9295
- F1: 0.9154
- Accuracy: 0.9784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2179 | 1.0 | 878 | 0.0527 | 0.9210 | 0.9472 | 0.9339 | 0.9875 |
| 0.0434 | 2.0 | 1756 | 0.0455 | 0.9366 | 0.9616 | 0.9489 | 0.9899 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| b6a7f286ba32c7c3f1cadf8b6f15c42a |
jonatasgrosman/exp_w2v2t_id_vp-fr_s335 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 469 | false | # exp_w2v2t_id_vp-fr_s335
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
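The 16 kHz requirement above means audio at other rates must be resampled before inference (in practice with a proper resampler such as `torchaudio.transforms.Resample`). Conceptually, resampling re-evaluates the waveform at new sample positions, as in this minimal linear-interpolation sketch (illustrative only):

```python
def resample_linear(samples, sr_in, sr_out):
    """Naive linear-interpolation resampling (for illustration, not production use)."""
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        t = i * sr_in / sr_out  # corresponding position in the input signal
        j = int(t)
        frac = t - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

ramp = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]  # 8 samples at a nominal 48 Hz
print(resample_linear(ramp, 48, 16))             # [0.0, 3.0] at a nominal 16 Hz
```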
| f66a49e9699d2c73698c18ace126e9f5 |
merve/tips9y0jvt5q-tip-regression | merve | null | 4 | 0 | sklearn | 0 | tabular-regression | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['tabular-regression', 'baseline-trainer'] | false | true | true | 8,010 | false |
## Baseline Model trained on tips9y0jvt5q to apply regression on tip
**Metrics of the best model (Ridge(alpha=10)):**

| metric | value |
|---|---|
| r2 | 0.415240 |
| neg_mean_squared_error | -1.098792 |
**See model plot below:**
```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=           continuous dirty_float low_card_int ... date  free_string useless
                                  total_bill       True       False       False        ... False False       False
                                  sex              False      False       False        ... False False       False
                                  smoker           False      False       False        ... False False       False
                                  day              False      False       False        ... False False       False
                                  time             False      False       False        ... False False       False
                                  size             False      False       False        ... False False       False
                                  [6 rows x 7 columns])),
                ('ridge', Ridge(alpha=10))])
```
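Ridge regression is ordinary least squares with an L2 penalty `alpha` on the coefficients, so the `Ridge(alpha=10)` estimator above shrinks its coefficients toward zero relative to plain least squares. A one-feature illustration in plain Python (not the dabl/EasyPreprocessor pipeline used for this model):

```python
def ridge_slope(xs, ys, alpha=0.0):
    """Closed-form ridge solution for y ≈ w*x (no intercept): w = Σxy / (Σx² + alpha)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # exactly y = 2x

ols = ridge_slope(xs, ys, alpha=0.0)     # 2.0: the least-squares fit
ridge = ridge_slope(xs, ys, alpha=10.0)  # 1.5: shrunk toward zero by the penalty
print(ols, ridge)
```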
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt | 70f42b01ecb59023599123336131071a |
kadirnar/yolov8n-v8.0 | kadirnar | null | 3 | 0 | null | 0 | object-detection | false | false | false | gpl-3.0 | null | ['detection-datasets/coco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'computer-vision', 'yolov8', 'yolov5'] | false | true | true | 457 | false |
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/) YOLOv8 in PyTorch > ONNX > CoreML > TFLite.
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO
model = YOLO('kadirnar/yolov8n-v8.0')
model.conf = 0.25  # example confidence threshold
model.iou = 0.45   # example IoU threshold
prediction = model.predict('image.jpg', imgsz=640, show=False, save=False)  # image path and size are placeholders
```
### BibTeX Entry and Citation Info
```
``` | bd921d8af4d292e0430cc5a3bef1006d |
Lorenzo1708/IA_Trabalho01 | Lorenzo1708 | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,031 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IA_Trabalho01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2717
- Accuracy: 0.8990
- F1: 0.8987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 939dbc7cdb8fc9fb94625f8ce8e11de1 |
p1atdev/plat-diffusion | p1atdev | null | 21 | 862 | diffusers | 60 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 5 | 0 | 5 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | true | true | 4,654 | false |
# Plat Diffusion v1.3.1
Plat Diffusion v1.3.1 is a latent diffusion model fine-tuned on [Waifu Diffusion v1.4 Anime Epoch 2](https://huggingface.co/hakurei/waifu-diffusion-v1-4) with images from niji・journey and other generative AI tools.
`.safetensors` file is [here](https://huggingface.co/p1atdev/pd-archive/tree/main).
[kl-f8-anime2.ckpt](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt) is recommended for VAE.
![eyecatch](https://s3.amazonaws.com/moonup/production/uploads/1674263031698-6305db1fcfbde33ef7d480ff.png)
### Recommended Negative Prompt
```
nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry
```
# Samples
![sample1](https://huggingface.co/p1atdev/plat-diffusion/resolve/main/samples/sample1.jpg)
```
masterpiece, best quality, high quality, 1girl, solo, brown hair, green eyes, looking at viewer, autumn, cumulonimbus cloud, lighting, blue sky, autumn leaves, garden, ultra detailed illustration, intricate detailed
Negative prompt: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
---
(This model is not good at male, sorry)
![sample2](https://huggingface.co/p1atdev/plat-diffusion/resolve/main/samples/sample2.jpg)
```
masterpiece, best quality, high quality, 1boy, man, male, brown hair, green eyes, looking at viewer, autumn, cumulonimbus cloud, lighting, blue sky, autumn leaves, garden, ultra detailed illustration, intricate detailed
Negative prompt: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
---
![sample3](https://huggingface.co/p1atdev/plat-diffusion/resolve/main/samples/sample3.jpg)
```
masterpiece, best quality, 1girl, pirate, gloves, parrot, bird, looking at viewer,
Negative prompt: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, 3d
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
---
![sample4](https://huggingface.co/p1atdev/plat-diffusion/resolve/main/samples/sample4.jpg)
```
masterpiece, best quality, scenery, starry sky, mountains, no humans
Negative prompt: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```
# 🧨 Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "p1atdev/plat-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32) # not working with float16
pipe = pipe.to("cuda")
prompt = "masterpiece, best quality, 1girl, solo, short hair, looking at viewer, japanese clothes, blue hair, portrait, kimono, bangs, colorful, closed mouth, blue kimono, butterfly, blue eyes, ultra detailed illustration"
negative_prompt = "nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry"
image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("girl.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| 647e08a6d73b786a134ffa68e1a1264d |
jonatasgrosman/exp_w2v2t_pl_unispeech-sat_s695 | jonatasgrosman | unispeech-sat | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pl'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'pl'] | false | true | true | 463 | false | # exp_w2v2t_pl_unispeech-sat_s695
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 27086aac38b6fab87559b361c8ff3462 |
joachimsallstrom/Double-Exposure-Diffusion | joachimsallstrom | null | 24 | 572 | diffusers | 122 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 4 | 1 | 3 | 0 | 5 | 5 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,837 | false |
[*Click here to download the latest Double Exposure embedding for SD 2.x in higher resolution*](https://huggingface.co/joachimsallstrom/Double-Exposure-Embedding)!
**Double Exposure Diffusion**
This is version 2 of the <i>Double Exposure Diffusion</i> model, trained specifically on images of people and a few animals.
The model file (Double_Exposure_v2.ckpt) can be downloaded on the **Files** page. You trigger double-exposure-style images with the token **_dublex style_**, or just **_dublex_**.
**Example 1:**
![Sample 1](https://huggingface.co/joachimsallstrom/double-exposure-style/resolve/main/v2_sample_images_1.jpg)
#### Example prompts and settings
<i>Galaxy man (image 1):</i><br>
**dublex man galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3273014177, Size: 512x512_
<i>Emma Stone (image 2):</i><br>
**dublex style Emma Stone, galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 250257155, Size: 512x512_
<i>Frodo (image 6):</i><br>
**dublex style young Elijah Wood as (Frodo), portrait, dark nature**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3717002975, Size: 512x512_
<br>
**Example 2:**
![Sample 1](https://huggingface.co/joachimsallstrom/double-exposure-style/resolve/main/v2_sample_images_2.jpg)
#### Example prompts and settings
<i>Scarlett Johansson (image 1):</i><br>
**dublex Scarlett Johansson, (haunted house), black background**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3059560186, Size: 512x512_
<i>Frozen Elsa (image 3):</i><br>
**dublex style Elsa, ice castle**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2867934627, Size: 512x512_
<i>Wolf (image 4):</i><br>
**dublex style wolf closeup, moon**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 312924946, Size: 512x512_
<br>
<p>
This model was trained with Shivam's DreamBooth implementation on Google Colab for 2,000 steps.
</p>
The previous version 1 of Double Exposure Diffusion is also available in the **Files** section.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 2c97db32cb1cc0bbd8fa75aafd9b5857 |
Helsinki-NLP/opus-mt-sv-tpi | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-sv-tpi
* source languages: sv
* target languages: tpi
* OPUS readme: [sv-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tpi | 31.4 | 0.513 |
| f0f623be0139e45162c7c9963e2006f9 |
PlanTL-GOB-ES/roberta-large-bne-sqac | PlanTL-GOB-ES | roberta | 9 | 1,465 | transformers | 2 | question-answering | true | false | false | apache-2.0 | ['es'] | ['PlanTL-GOB-ES/SQAC'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['national library of spain', 'spanish', 'bne', 'qa', 'question answering'] | true | true | true | 7,041 | false |
# Spanish RoBERTa-large trained on BNE, fine-tuned on the Spanish Question Answering Corpus (SQAC)
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variable and metrics](#variable-and-metrics)
  - [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-large-bne-sqac** is a Question Answering (QA) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
**roberta-large-bne-sqac** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="PlanTL-GOB-ES/roberta-large-bne-sqac")
text = "¿Dónde vivo?"
context = "Me llamo Wolfgang y vivo en Berlin"
qa_results = nlp(text, context)
print(qa_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the QA dataset in Spanish called [SQAC corpus](https://huggingface.co/datasets/PlanTL-GOB-ES/SQAC) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation results
We evaluated the **roberta-large-bne-sqac** on the SQAC test set against standard multilingual and monolingual baselines:
| Model | SQAC (F1) |
| ------------|:----|
| roberta-large-bne-sqac | **82.02** |
| roberta-base-bne-sqac | 79.23|
| BETO | 79.23 |
| mBERT | 75.62 |
| BERTIN | 76.78 |
| ELECTRA | 73.83 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
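For illustration only, here is a simplified pure-Python sketch of the token-overlap F1 used in SQuAD-style QA evaluation like the table above (the official evaluation scripts additionally normalize casing, punctuation and articles before comparing answers):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted answer span and a gold answer."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    if not pred_tokens or not gold_tokens:
        # If either answer is empty, F1 is 1 only when both are empty.
        return float(pred_tokens == gold_tokens)
    # Count tokens shared by the two answers (with multiplicity).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("en Berlin", "en Berlin"))                 # 1.0
print(round(token_f1("vivo en Berlin", "en Berlin"), 2))  # 0.8
```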
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. | f8a4282640a750039eb6a8abebf66897 |
echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid | echarlaix | bert | 87 | 668 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['sst2'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-classification'] | false | true | true | 2,839 | false |
## bert-base-uncased model fine-tuned on SST-2
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **37%** of the original weights.
The model contains **51%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
<div class="graph"><script src="/echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid/raw/main/model_card/density_info.js" id="2d0fc334-fe98-4315-8890-d6eaca1fa9be"></script></div>
In terms of performance, its **accuracy** is **91.17**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on the SST-2 task, and distilled from the model [textattack/bert-base-uncased-SST-2](https://huggingface.co/textattack/bert-base-uncased-SST-2).
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning method is that some of the attention heads are completely removed: 88 heads were removed out of a total of 144 (61.1%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid/raw/main/model_card/pruning_info.js" id="93b19d7f-c11b-4edf-9670-091e40d9be25"></script></div>
## Details of the SST-2 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SST-2 | train | 67K |
| SST-2 | eval | 872 |
### Results
**Pytorch model file size**: `351MB` (original BERT: `420MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **accuracy** | **91.17** | **92.7** | **-1.53**|
## Example Usage
Install nn_pruning: it contains the optimization script, which just packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
cls_pipeline = pipeline(
"text-classification",
model="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
tokenizer="echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid",
)
print(f"Parameters count (includes only head pruning, no feed forward pruning)={int(cls_pipeline.model.num_parameters() / 1E6)}M")
cls_pipeline.model = optimize_model(cls_pipeline.model, "dense")
print(f"Parameters count after optimization={int(cls_pipeline.model.num_parameters() / 1E6)}M")
predictions = cls_pipeline("This restaurant is awesome")
print(predictions)
```
| c503a2f1a28ecb194add302d6da79e2e |
KoichiYasuoka/deberta-large-japanese-wikipedia-ud-goeswith | KoichiYasuoka | deberta-v2 | 10 | 5 | transformers | 0 | token-classification | true | false | false | cc-by-sa-4.0 | ['ja'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['japanese', 'wikipedia', 'pos', 'dependency-parsing'] | false | true | true | 2,904 | false |
# deberta-large-japanese-wikipedia-ud-goeswith
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and 青空文庫 texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-large-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW).
## How to Use
```py
class UDgoeswith(object):
def __init__(self,bert):
from transformers import AutoTokenizer,AutoModelForTokenClassification
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForTokenClassification.from_pretrained(bert)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=self.tokenizer(text,return_offsets_mapping=True)
v=w["input_ids"]
x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
with torch.no_grad():
e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
g=self.model.config.label2id["X|_|goeswith"]
r=numpy.tri(e.shape[0])
for i in range(e.shape[0]):
for j in range(i+2,e.shape[1]):
r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
p=numpy.zeros(m.shape)
p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
for i in range(1,m.shape[0]):
m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
q=self.model.config.id2label[p[i,h[i]]].split("|")
u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=UDgoeswith("KoichiYasuoka/deberta-large-japanese-wikipedia-ud-goeswith")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-large-japanese-wikipedia-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
| 82279d289b32d70d1b81bf26fc060877 |
LowGI/my_asr_model_2 | LowGI | wav2vec2 | 9 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['minds14'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,508 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_asr_model_2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1785
- Wer: 1.0
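For context, a Wer of 1.0 means the word-level edit distance between hypothesis and reference equals the reference length. A minimal pure-Python sketch of how word error rate is computed (illustrative only, not the evaluation code used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("turn on the lights", "turn off the lights"))  # 0.25
print(wer("hello world", "goodbye moon"))                # 1.0
```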
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0949 | 20.0 | 100 | 3.1923 | 1.0 |
| 3.0836 | 40.0 | 200 | 3.1769 | 1.0 |
| 3.0539 | 60.0 | 300 | 3.1766 | 1.0 |
| 3.0687 | 80.0 | 400 | 3.1853 | 1.0 |
| 3.0649 | 100.0 | 500 | 3.1785 | 1.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 1a41d71909c6d27e39dc5aa671f1f0a0 |
jonatasgrosman/exp_w2v2t_it_vp-nl_s222 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'it'] | false | true | true | 469 | false | # exp_w2v2t_it_vp-nl_s222
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| a678c2832cc8d2016ab0a3dfd113c6ee |
EP9/mt5-small-tuto-mt5-small-2 | EP9 | mt5 | 14 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,522 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-tuto-mt5-small-2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8564
- Rouge1: 0.4159
- Rouge2: 0.2906
- Rougel: 0.3928
- Rougelsum: 0.3929
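For reference, the Rouge1 figure above measures unigram overlap between generated and reference text. A simplified pure-Python sketch of ROUGE-1 F1 (the card's numbers come from the `rouge_score` tooling, which applies its own tokenization and stemming):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over unigram overlap."""
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())  # shared unigrams, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 3))  # 0.667
```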
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.1519 | 1.0 | 6034 | 1.8564 | 0.4159 | 0.2906 | 0.3928 | 0.3929 |
| 2.1289 | 2.0 | 12068 | 1.8564 | 0.4159 | 0.2906 | 0.3928 | 0.3929 |
| 2.1291 | 3.0 | 18102 | 1.8564 | 0.4159 | 0.2906 | 0.3928 | 0.3929 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| aa5b6d1ab0714786a416a99e3774b3bf |
Helsinki-NLP/opus-mt-tc-big-et-en | Helsinki-NLP | marian | 13 | 63 | transformers | 0 | translation | true | true | false | cc-by-4.0 | ['en', 'et'] | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 5,285 | false | # opus-mt-tc-big-et-en
Neural machine translation model for translating from Estonian (et) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): est
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT est-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/est-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Takso ootab.",
"Kon sa elät?"
]
model_name = "Helsinki-NLP/opus-mt-tc-big-et-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Taxi's waiting.
# Kon you elät?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-et-en")
print(pipe("Takso ootab."))
# expected output: Taxi's waiting.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/est-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| est-eng | tatoeba-test-v2021-08-07 | 0.73707 | 59.7 | 1359 | 8811 |
| est-eng | flores101-devtest | 0.64463 | 38.6 | 1012 | 24721 |
| est-eng | newsdev2018 | 0.59899 | 33.8 | 2000 | 43068 |
| est-eng | newstest2018 | 0.60708 | 34.3 | 2000 | 45405 |
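The chr-F column above is a character n-gram F-score. A simplified pure-Python sketch of the idea (the official scores are produced by sacrebleu's chrF implementation, which handles whitespace and weighting slightly differently):

```python
from collections import Counter

def chrf(reference: str, hypothesis: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: F-beta over character n-grams, averaged for n=1..max_n."""
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    scores = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        if not ref_ngrams or not hyp_ngrams:
            continue  # string shorter than n
        overlap = sum((ref_ngrams & hyp_ngrams).values())
        prec = overlap / sum(hyp_ngrams.values())
        rec = overlap / sum(ref_ngrams.values())
        if prec + rec == 0:
            scores.append(0.0)
        else:
            # beta=2 weights recall twice as much as precision, as in chrF.
            scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

print(chrf("Taxi's waiting.", "Taxi's waiting."))  # 1.0
```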
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:54:11 EEST 2022
* port machine: LM0-400-22516.local
| 54ac2f11f06c802bf538008dcd7c1bab |
jonatasgrosman/exp_w2v2t_th_unispeech-ml_s351 | jonatasgrosman | unispeech | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 503 | false | # exp_w2v2t_th_unispeech-ml_s351
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition on Thai using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| b65f8a50e70eeb4a0d1780693e8a5397 |
muhtasham/tiny-mlm-glue-qqp-target-glue-qnli | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,803 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qqp-target-glue-qnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qqp](https://huggingface.co/muhtasham/tiny-mlm-glue-qqp) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4653
- Accuracy: 0.7820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6087 | 0.15 | 500 | 0.5449 | 0.7311 |
| 0.5432 | 0.31 | 1000 | 0.5312 | 0.7390 |
| 0.521 | 0.46 | 1500 | 0.4937 | 0.7648 |
| 0.5144 | 0.61 | 2000 | 0.5254 | 0.7465 |
| 0.5128 | 0.76 | 2500 | 0.4786 | 0.7769 |
| 0.5037 | 0.92 | 3000 | 0.4670 | 0.7849 |
| 0.4915 | 1.07 | 3500 | 0.4569 | 0.7899 |
| 0.4804 | 1.22 | 4000 | 0.4689 | 0.7800 |
| 0.4676 | 1.37 | 4500 | 0.4725 | 0.7769 |
| 0.4738 | 1.53 | 5000 | 0.4653 | 0.7820 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 7e6ea3b80e09ec444abbd7117acd5696 |
sachinsahu/Heresy-clustered | sachinsahu | distilbert | 8 | 24 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,862 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sachinsahu/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1596
- Train End Logits Accuracy: 0.9653
- Train Start Logits Accuracy: 0.9653
- Validation Loss: 0.4279
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 0.6667
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1596 | 0.9653 | 0.9653 | 0.4279 | 0.6667 | 0.6667 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 1c7173cc766225bffdb3586a7ad20055 |
burakyldrm/wav2vec2-burak-new-300-v2-2 | burakyldrm | wav2vec2 | 13 | 11 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,876 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6158
- Wer: 0.3094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 241
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 5.5201 | 8.62 | 500 | 3.1581 | 1.0 |
| 2.1532 | 17.24 | 1000 | 0.6883 | 0.5979 |
| 0.5465 | 25.86 | 1500 | 0.5028 | 0.4432 |
| 0.3287 | 34.48 | 2000 | 0.4986 | 0.4024 |
| 0.2571 | 43.1 | 2500 | 0.4920 | 0.3824 |
| 0.217 | 51.72 | 3000 | 0.5265 | 0.3724 |
| 0.1848 | 60.34 | 3500 | 0.5539 | 0.3714 |
| 0.1605 | 68.97 | 4000 | 0.5689 | 0.3670 |
| 0.1413 | 77.59 | 4500 | 0.5962 | 0.3501 |
| 0.1316 | 86.21 | 5000 | 0.5732 | 0.3494 |
| 0.1168 | 94.83 | 5500 | 0.5912 | 0.3461 |
| 0.1193 | 103.45 | 6000 | 0.5766 | 0.3378 |
| 0.0996 | 112.07 | 6500 | 0.5818 | 0.3403 |
| 0.0941 | 120.69 | 7000 | 0.5986 | 0.3315 |
| 0.0912 | 129.31 | 7500 | 0.5802 | 0.3280 |
| 0.0865 | 137.93 | 8000 | 0.5878 | 0.3290 |
| 0.0804 | 146.55 | 8500 | 0.5784 | 0.3228 |
| 0.0739 | 155.17 | 9000 | 0.5791 | 0.3180 |
| 0.0718 | 163.79 | 9500 | 0.5864 | 0.3146 |
| 0.0681 | 172.41 | 10000 | 0.6104 | 0.3178 |
| 0.0688 | 181.03 | 10500 | 0.5983 | 0.3160 |
| 0.0657 | 189.66 | 11000 | 0.6228 | 0.3203 |
| 0.0598 | 198.28 | 11500 | 0.6057 | 0.3122 |
| 0.0597 | 206.9 | 12000 | 0.6094 | 0.3129 |
| 0.0551 | 215.52 | 12500 | 0.6114 | 0.3127 |
| 0.0507 | 224.14 | 13000 | 0.6056 | 0.3094 |
| 0.0554 | 232.76 | 13500 | 0.6158 | 0.3094 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| f427a37aa5dee1b482ebcf65b108947e |
philschmid/setfit-ag-news-endpoint | philschmid | mpnet | 15 | 10 | transformers | 4 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'endpoints-template', 'text-classification'] | false | true | true | 2,581 | false |
# SetFit AG News
This is a [SetFit](https://github.com/huggingface/setfit/tree/main) classifier fine-tuned on the [AG News](https://huggingface.co/datasets/ag_news) dataset.
The model was created following the [Outperform OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit) blog post by [Philipp Schmid](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
The model achieves an accuracy of 0.87 on the test set and was only trained with `32` total examples (8 per class).
```bash
***** Running evaluation *****
model used: sentence-transformers/all-mpnet-base-v2
train dataset: 32 samples
accuracy: 0.8731578947368421
```
#### What is SetFit?
"SetFit" (https://arxiv.org/abs/2209.11055) is a new approach that can be used to create highly accurate text-classification models with limited labeled data. SetFit outperforms GPT-3 in 7 out of 11 tasks, while being 1600x smaller.
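The first step of that approach is generating contrastive sentence pairs from the handful of labeled examples (same label → positive pair, different labels → negative pair), which are then used to fine-tune the sentence transformer. A minimal sketch of that pair generation (illustrative only, not the `setfit` library's actual implementation; the example texts are made up):

```python
from itertools import combinations

def generate_pairs(examples):
    """examples: list of (text, label). Returns (text_a, text_b, is_same_label) triples."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

few_shot = [
    ("Stocks rallied after the earnings call", "Business"),
    ("The central bank raised interest rates", "Business"),
    ("The striker scored twice in the final", "Sports"),
    ("New telescope spots a distant galaxy", "Sci/Tech"),
]
pairs = generate_pairs(few_shot)
print(len(pairs))                            # 6 pairs from 4 examples
print(sum(label for _, _, label in pairs))   # 1.0 -> only one same-label pair
```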
Check out the blog to learn more: [Outperform OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit)
# Inference Endpoints
The model repository also implements a generic custom `handler.py` as an example for how to use `SetFit` models with [inference-endpoints](https://hf.co/inference-endpoints).
Code: https://huggingface.co/philschmid/setfit-ag-news-endpoint/blob/main/handler.py
## Send requests with Python
We are going to use the `requests` library to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
ENDPOINT_URL=""# url of your endpoint
HF_TOKEN=""
# payload samples
regular_payload = { "inputs": "Coming to The Rescue Got a unique problem? Not to worry: you can find a financial planner for every specialized need"}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=regular_payload)
classified = response.json()
print(classified)
# [ { "label": "World", "score": 0.12341519122860946 }, { "label": "Sports", "score": 0.11741269832494523 }, { "label": "Business", "score": 0.6124446065942992 }, { "label": "Sci/Tech", "score": 0.14672750385214603 } ]
```
**curl example**
```bash
curl https://YOURDOMAIN.us-east-1.aws.endpoints.huggingface.cloud \
-X POST \
-d '{"inputs": "Coming to The Rescue Got a unique problem? Not to worry: you can find a financial planner for every specialized need"}' \
-H "Authorization: Bearer XXX" \
-H "Content-Type: application/json"
``` | 8846dbc12dfa600967e8da463032fbe5 |
grantslewis/spelling-correction-english-base-finetuned-places | grantslewis | bart | 14 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,329 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spelling-correction-english-base-finetuned-places
This model is a fine-tuned version of [oliverguhr/spelling-correction-english-base](https://huggingface.co/oliverguhr/spelling-correction-english-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1461
- Cer: 0.0143
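CER here is the character error rate: the character-level Levenshtein distance between prediction and reference, divided by the reference length. A minimal sketch of the metric (not the exact implementation used during evaluation):

```python
def cer(reference: str, hypothesis: str) -> float:
    # character-level Levenshtein distance (rolling-array DP), normalized by reference length
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
            prev = cur
    return dp[n] / m

print(cer("spelling", "speling"))  # 0.125 (one deletion over 8 characters)
```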
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1608 | 1.0 | 875 | 0.1461 | 0.0143 |
| 0.13 | 2.0 | 1750 | 0.1461 | 0.0143 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| 5f18caad77a96f83d2e7183d73cfc147 |
darshanz/occupation-prediction | darshanz | vit | 7 | 3 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,570 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# darshanz/occupation-prediction
This model is ViT base patch16, pretrained on the ImageNet dataset and then fine-tuned on our custom occupation-prediction dataset, which contains facial images of Indian people labeled by occupation. The model predicts a person's occupation from a facial image, categorizing inputs into 5 classes: Anchor, Athlete, Doctor, Professor, and Farmer. It achieves an accuracy of 84.43%.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0001, 'decay_steps': 70, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.4}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
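With `power=1.0` and `cycle=False`, the `PolynomialDecay` schedule above reduces to a linear ramp from 1e-4 down to 0 over 70 steps. A quick sketch of that schedule (mirroring the Keras formula under those settings):

```python
def polynomial_decay(step, initial_lr=1e-4, decay_steps=70, end_lr=0.0, power=1.0):
    # tf.keras PolynomialDecay with cycle=False: clamp the step, then interpolate
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))    # 0.0001
print(polynomial_decay(35))   # 5e-05 (halfway)
print(polynomial_decay(100))  # 0.0 (clamped past decay_steps)
```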
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 1.0840 | 0.6156 | 0.8813 | 0.6843 | 0.75 | 0.9700 | 0 |
| 0.4686 | 0.8406 | 0.9875 | 0.5345 | 0.8100 | 0.9867 | 1 |
| 0.2600 | 0.9312 | 0.9953 | 0.4805 | 0.8333 | 0.9800 | 2 |
| 0.1515 | 0.9609 | 0.9969 | 0.5071 | 0.8267 | 0.9733 | 3 |
| 0.0746 | 0.9875 | 1.0 | 0.4853 | 0.8500 | 0.9833 | 4 |
| 0.0468 | 0.9953 | 1.0 | 0.5006 | 0.8433 | 0.9733 | 5 |
| 0.0378 | 0.9953 | 1.0 | 0.4967 | 0.8433 | 0.9800 | 6 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Tokenizers 0.12.1
| f83b31cf984d7a638f9f9a3bd17e7828 |
ManqingLiu/distilbert-base-uncased-finetuned-clinc | ManqingLiu | distilbert | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,482 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
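For reference, the Adam update with the betas and epsilon listed above looks like this — a bias-corrected sketch of the update rule, not the exact fused implementation used by PyTorch:

```python
def adam_step(param, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    # first/second moment estimates with bias correction, then the parameter update
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# on the very first step, the update magnitude is ~lr regardless of gradient scale
p, m, v = adam_step(param=1.0, grad=0.5, m=0.0, v=0.0, t=1)
print(p)  # ≈ 0.99998
```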
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2834 | 0.7394 |
| 2.6289 | 2.0 | 636 | 1.8732 | 0.8348 |
| 1.5479 | 3.0 | 954 | 1.1580 | 0.8903 |
| 1.0135 | 4.0 | 1272 | 0.8585 | 0.9077 |
| 0.7968 | 5.0 | 1590 | 0.7755 | 0.9171 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 10dc45e78b39edccf8e7cf56223139e9 |
jiobiala24/wav2vec2-base-checkpoint-4 | jiobiala24 | wav2vec2 | 13 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,025 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-4
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
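The linear scheduler with warmup used above ramps the learning rate up over the first 1,000 steps and then decays it linearly to zero. A sketch of the LR multiplier (the total step count here is hypothetical — it depends on the dataset size):

```python
def linear_schedule_with_warmup(step, warmup_steps=1000, total_steps=20000):
    # multiplier applied to the base learning rate (here 0.0001)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_schedule_with_warmup(500))    # 0.5 (halfway through warmup)
print(linear_schedule_with_warmup(1000))   # 1.0 (warmup done)
print(linear_schedule_with_warmup(20000))  # 0.0 (end of training)
```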
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| 0e897d770dda9a87aeb9429c6c3552df |
Helsinki-NLP/opus-mt-urj-urj | Helsinki-NLP | marian | 11 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | ['se', 'fi', 'hu', 'et', 'urj'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 3,584 | false |
### urj-urj
* source group: Uralic languages
* target group: Uralic languages
* OPUS readme: [urj-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.eval.txt)
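In practice, the target-language token is simply prepended to the source sentence before tokenization. A tiny helper illustrating the convention (the example sentence and language ID are illustrative):

```python
def add_target_token(sentence, target_lang):
    # multilingual OPUS-MT models select the output language from a
    # sentence-initial ">>id<<" token, e.g. ">>fin<<" for Finnish
    return f">>{target_lang}<< {sentence}"

print(add_target_token("See on hea raamat.", "fin"))
# >>fin<< See on hea raamat.
```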
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 5.1 | 0.288 |
| Tatoeba-test.est-fin.est.fin | 50.9 | 0.709 |
| Tatoeba-test.est-fkv.est.fkv | 0.7 | 0.215 |
| Tatoeba-test.est-vep.est.vep | 1.0 | 0.154 |
| Tatoeba-test.fin-est.fin.est | 55.5 | 0.718 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.8 | 0.254 |
| Tatoeba-test.fin-hun.fin.hun | 45.0 | 0.672 |
| Tatoeba-test.fin-izh.fin.izh | 7.1 | 0.492 |
| Tatoeba-test.fin-krl.fin.krl | 2.6 | 0.278 |
| Tatoeba-test.fkv-est.fkv.est | 0.6 | 0.099 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.5 | 0.444 |
| Tatoeba-test.fkv-liv.fkv.liv | 0.6 | 0.101 |
| Tatoeba-test.fkv-vep.fkv.vep | 0.6 | 0.113 |
| Tatoeba-test.hun-fin.hun.fin | 46.3 | 0.675 |
| Tatoeba-test.izh-fin.izh.fin | 13.4 | 0.431 |
| Tatoeba-test.izh-krl.izh.krl | 2.9 | 0.078 |
| Tatoeba-test.krl-fin.krl.fin | 14.1 | 0.439 |
| Tatoeba-test.krl-izh.krl.izh | 1.0 | 0.125 |
| Tatoeba-test.liv-fkv.liv.fkv | 0.9 | 0.170 |
| Tatoeba-test.liv-vep.liv.vep | 2.6 | 0.176 |
| Tatoeba-test.multi.multi | 32.9 | 0.580 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.265 |
| Tatoeba-test.vep-fkv.vep.fkv | 0.9 | 0.239 |
| Tatoeba-test.vep-liv.vep.liv | 2.6 | 0.190 |
### System Info:
- hf_name: urj-urj
- source_languages: urj
- target_languages: urj
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'urj']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt
- src_alpha3: urj
- tgt_alpha3: urj
- short_pair: urj-urj
- chrF2_score: 0.58
- bleu: 32.9
- brevity_penalty: 1.0
- ref_len: 19444.0
- src_name: Uralic languages
- tgt_name: Uralic languages
- train_date: 2020-07-27
- src_alpha2: urj
- tgt_alpha2: urj
- prefer_old: False
- long_pair: urj-urj
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | b1e510b325a763c3db63d62f71b39880 |
Helsinki-NLP/opus-mt-ko-fi | Helsinki-NLP | marian | 11 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ko', 'fi'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,037 | false |
### kor-fin
* source group: Korean
* target group: Finnish
* OPUS readme: [kor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md)
* model: transformer-align
* source language(s): kor kor_Hang kor_Latn
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kor.fin | 26.6 | 0.502 |
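The BLEU score above includes a brevity penalty (reported as 0.892 in the system info below), which discounts translations shorter than the reference. A sketch of how that factor is computed — the candidate length here is illustrative, back-solved from the reported numbers:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # BLEU's brevity penalty: 1 for long-enough output, exp(1 - r/c) otherwise
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

print(brevity_penalty(2500, 2251))            # 1.0
print(round(brevity_penalty(2020, 2251), 3))  # 0.892
```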
### System Info:
- hf_name: kor-fin
- source_languages: kor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ko', 'fi']
- src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt
- src_alpha3: kor
- tgt_alpha3: fin
- short_pair: ko-fi
- chrF2_score: 0.502
- bleu: 26.6
- brevity_penalty: 0.892
- ref_len: 2251.0
- src_name: Korean
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: ko
- tgt_alpha2: fi
- prefer_old: False
- long_pair: kor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 183a7c49c176aba3d9c26608533b4144 |
MeshalAlamr/wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_5_classes_unknown | MeshalAlamr | wav2vec2 | 10 | 3 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 6,262 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_5_classes_unknown
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7951
- Accuracy: 0.8278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 80
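Gradient accumulation above turns 4 micro-batches of size 4 into one effective batch of 16: gradients are averaged across micro-batches before a single optimizer step. A framework-agnostic toy sketch of the idea:

```python
def accumulate_gradients(micro_batches, grad_fn):
    # average the per-micro-batch gradients; one optimizer step then uses the result
    total = 0.0
    for batch in micro_batches:
        total += grad_fn(batch) / len(micro_batches)
    return total

# toy "gradient": the mean of the batch values
micro_batches = [[1, 2, 3, 4], [2, 2, 2, 2], [0, 0, 0, 0], [4, 4, 4, 4]]
g = accumulate_gradients(micro_batches, grad_fn=lambda b: sum(b) / len(b))
print(g)  # 2.125 — identical to the mean over all 16 examples at once
```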
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 1.7933 | 0.2306 |
| No log | 1.8 | 6 | 1.7927 | 0.1667 |
| No log | 2.8 | 9 | 1.7908 | 0.1667 |
| 2.1874 | 3.8 | 12 | 1.7836 | 0.225 |
| 2.1874 | 4.8 | 15 | 1.7728 | 0.3083 |
| 2.1874 | 5.8 | 18 | 1.6857 | 0.3889 |
| 2.0742 | 6.8 | 21 | 1.4554 | 0.7333 |
| 2.0742 | 7.8 | 24 | 1.2621 | 0.6861 |
| 2.0742 | 8.8 | 27 | 1.0360 | 0.7528 |
| 1.5429 | 9.8 | 30 | 1.0220 | 0.6472 |
| 1.5429 | 10.8 | 33 | 0.7951 | 0.8278 |
| 1.5429 | 11.8 | 36 | 0.7954 | 0.8111 |
| 1.5429 | 12.8 | 39 | 0.6698 | 0.8167 |
| 0.927 | 13.8 | 42 | 0.8400 | 0.6694 |
| 0.927 | 14.8 | 45 | 0.7026 | 0.7194 |
| 0.927 | 15.8 | 48 | 0.7232 | 0.6944 |
| 0.429 | 16.8 | 51 | 0.6640 | 0.7333 |
| 0.429 | 17.8 | 54 | 1.1750 | 0.6 |
| 0.429 | 18.8 | 57 | 0.9270 | 0.6722 |
| 0.2583 | 19.8 | 60 | 1.4541 | 0.5417 |
| 0.2583 | 20.8 | 63 | 1.8917 | 0.4472 |
| 0.2583 | 21.8 | 66 | 1.3213 | 0.6472 |
| 0.2583 | 22.8 | 69 | 1.3114 | 0.7 |
| 0.1754 | 23.8 | 72 | 0.8079 | 0.7389 |
| 0.1754 | 24.8 | 75 | 1.6070 | 0.4861 |
| 0.1754 | 25.8 | 78 | 1.6949 | 0.5083 |
| 0.1348 | 26.8 | 81 | 1.4364 | 0.6472 |
| 0.1348 | 27.8 | 84 | 0.9045 | 0.7889 |
| 0.1348 | 28.8 | 87 | 1.1878 | 0.7111 |
| 0.0634 | 29.8 | 90 | 0.9678 | 0.7667 |
| 0.0634 | 30.8 | 93 | 0.9572 | 0.7889 |
| 0.0634 | 31.8 | 96 | 0.8931 | 0.8139 |
| 0.0634 | 32.8 | 99 | 1.4805 | 0.6583 |
| 0.1267 | 33.8 | 102 | 2.6092 | 0.4778 |
| 0.1267 | 34.8 | 105 | 2.2933 | 0.5306 |
| 0.1267 | 35.8 | 108 | 1.9648 | 0.6083 |
| 0.0261 | 36.8 | 111 | 1.8385 | 0.65 |
| 0.0261 | 37.8 | 114 | 2.0328 | 0.6028 |
| 0.0261 | 38.8 | 117 | 2.3722 | 0.55 |
| 0.041 | 39.8 | 120 | 2.7606 | 0.4917 |
| 0.041 | 40.8 | 123 | 2.5793 | 0.5056 |
| 0.041 | 41.8 | 126 | 2.0967 | 0.5917 |
| 0.041 | 42.8 | 129 | 1.7498 | 0.6611 |
| 0.1004 | 43.8 | 132 | 1.6564 | 0.6722 |
| 0.1004 | 44.8 | 135 | 1.7533 | 0.6583 |
| 0.1004 | 45.8 | 138 | 2.3335 | 0.5806 |
| 0.0688 | 46.8 | 141 | 2.9578 | 0.4778 |
| 0.0688 | 47.8 | 144 | 3.2396 | 0.4472 |
| 0.0688 | 48.8 | 147 | 3.2100 | 0.4528 |
| 0.0082 | 49.8 | 150 | 3.2018 | 0.4472 |
| 0.0082 | 50.8 | 153 | 3.1985 | 0.45 |
| 0.0082 | 51.8 | 156 | 2.6950 | 0.525 |
| 0.0082 | 52.8 | 159 | 2.2335 | 0.6056 |
| 0.0159 | 53.8 | 162 | 2.0467 | 0.6306 |
| 0.0159 | 54.8 | 165 | 1.8858 | 0.6583 |
| 0.0159 | 55.8 | 168 | 1.8239 | 0.6694 |
| 0.0083 | 56.8 | 171 | 1.7927 | 0.675 |
| 0.0083 | 57.8 | 174 | 1.7636 | 0.6861 |
| 0.0083 | 58.8 | 177 | 1.7792 | 0.675 |
| 0.0645 | 59.8 | 180 | 1.9165 | 0.6611 |
| 0.0645 | 60.8 | 183 | 2.0780 | 0.6361 |
| 0.0645 | 61.8 | 186 | 2.2058 | 0.6028 |
| 0.0645 | 62.8 | 189 | 2.3011 | 0.5944 |
| 0.01 | 63.8 | 192 | 2.4047 | 0.5722 |
| 0.01 | 64.8 | 195 | 2.4870 | 0.5639 |
| 0.01 | 65.8 | 198 | 2.5513 | 0.5417 |
| 0.008 | 66.8 | 201 | 2.5512 | 0.5333 |
| 0.008 | 67.8 | 204 | 2.3419 | 0.5778 |
| 0.008 | 68.8 | 207 | 2.2424 | 0.5944 |
| 0.0404 | 69.8 | 210 | 2.2009 | 0.6167 |
| 0.0404 | 70.8 | 213 | 2.1788 | 0.6278 |
| 0.0404 | 71.8 | 216 | 2.1633 | 0.6306 |
| 0.0404 | 72.8 | 219 | 2.1525 | 0.6306 |
| 0.0106 | 73.8 | 222 | 2.1435 | 0.6389 |
| 0.0106 | 74.8 | 225 | 2.1391 | 0.6389 |
| 0.0106 | 75.8 | 228 | 2.1327 | 0.6472 |
| 0.0076 | 76.8 | 231 | 2.1287 | 0.6444 |
| 0.0076 | 77.8 | 234 | 2.1267 | 0.65 |
| 0.0076 | 78.8 | 237 | 2.1195 | 0.6472 |
| 0.0097 | 79.8 | 240 | 2.1182 | 0.6417 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| d1f9d07bf8c1920a9c153db4559b996e |
ThomasSimonini/stable-baselines3-ppo-LunarLander-v2 | ThomasSimonini | null | 4 | 0 | null | 0 | reinforcement-learning | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['deep-reinforcement-learning', 'reinforcement-learning'] | false | true | true | 2,308 | false |
# ARCHIVED MODEL, DO NOT USE IT
# stable-baselines3-ppo-LunarLander-v2 🚀👩🚀
This is a saved model of a PPO agent playing [LunarLander-v2](https://gym.openai.com/envs/LunarLander-v2/). The model is taken from [rl-baselines3-zoo](https://github.com/DLR-RM/rl-trained-agents)
The goal is to land the lander correctly by controlling its engines (fire the left orientation engine, the main engine, or the right orientation engine).
<iframe width="560" height="315" src="https://www.youtube.com/embed/kE-Fvht81I0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
👉 You can watch the agent playing by using this [notebook](https://colab.research.google.com/drive/19OonMRkMyCH6Dg0ECFQi7evxMRqkW3U0?usp=sharing)
## Use the Model
### Install the dependencies
You need to use the [Stable Baselines 3 Hugging Face version](https://github.com/simoninithomas/stable-baselines3) of the library (this version contains the function to load saved models directly from the Hugging Face Hub):
```
pip install git+https://github.com/simoninithomas/stable-baselines3.git
```
### Evaluate the agent
⚠️ You need Linux or macOS to use this environment. If that's not the case, you can use the [colab notebook](https://colab.research.google.com/drive/19OonMRkMyCH6Dg0ECFQi7evxMRqkW3U0#scrollTo=Qbzj9quh0FsP)
```
# Import the libraries
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Load the environment
env = gym.make('LunarLander-v2')
model = PPO.load_from_huggingface(hf_model_id="ThomasSimonini/stable-baselines3-ppo-LunarLander-v2",hf_model_filename="LunarLander-v2")
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = env.step(action)
env.render()
if done:
obs = env.reset()
```
## Results
Mean Reward (10 evaluation episodes): 245.63 +/- 10.02 | 8bd1b8be6581daa15f38c5d7d42b957f |
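`evaluate_policy` reports the mean and standard deviation of the per-episode returns. A toy recomputation of that summary (the episode rewards below are hypothetical, chosen only to average to the reported mean; the resulting std differs from the reported one):

```python
import statistics

episode_rewards = [250.1, 238.4, 255.0, 243.2, 249.9,
                   240.6, 252.3, 236.8, 247.5, 242.5]
mean_reward = statistics.mean(episode_rewards)
std_reward = statistics.pstdev(episode_rewards)  # population std, as in numpy's default
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")  # mean_reward=245.63 +/- 5.88
```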
aware-ai/wav2vec2-xls-r-1b-5gram-german | aware-ai | wav2vec2 | 13 | 24 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard'] | true | true | true | 2,364 | false |
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
from unidecode import unidecode
import re
from datasets import load_dataset, load_metric
import datasets
counter = 0
wer_counter = 0
cer_counter = 0
device = "cuda" if torch.cuda.is_available() else "cpu"
special_chars = [["Ä"," AE "], ["Ö"," OE "], ["Ü"," UE "], ["ä"," ae "], ["ö"," oe "], ["ü"," ue "]]
def clean_text(sentence):
for special in special_chars:
sentence = sentence.replace(special[0], special[1])
sentence = unidecode(sentence)
for special in special_chars:
sentence = sentence.replace(special[1], special[0])
sentence = re.sub("[^a-zA-Z0-9öäüÖÄÜ ,.!?]", " ", sentence)
return sentence
def main(model_id):
print("load model")
model = AutoModelForCTC.from_pretrained(model_id).to(device)
print("load processor")
    processor = AutoProcessor.from_pretrained(model_id)
print("load metrics")
wer = load_metric("wer")
cer = load_metric("cer")
ds = load_dataset("mozilla-foundation/common_voice_8_0","de")
ds = ds["test"]
ds = ds.cast_column(
"audio", datasets.features.Audio(sampling_rate=16_000)
)
def calculate_metrics(batch):
global counter, wer_counter, cer_counter
resampled_audio = batch["audio"]["array"]
input_values = processor(resampled_audio, return_tensors="pt", sampling_rate=16_000).input_values
with torch.no_grad():
logits = model(input_values.to(device)).logits.cpu().numpy()[0]
decoded = processor.decode(logits)
pred = decoded.text.lower()
ref = clean_text(batch["sentence"]).lower()
wer_result = wer.compute(predictions=[pred], references=[ref])
cer_result = cer.compute(predictions=[pred], references=[ref])
counter += 1
wer_counter += wer_result
cer_counter += cer_result
        if counter % 100 == 0:
print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
return batch
ds.map(calculate_metrics, remove_columns=ds.column_names)
print(f"WER: {(wer_counter/counter)*100} | CER: {(cer_counter/counter)*100}")
model_id = "flozi00/wav2vec2-xls-r-1b-5gram-german"
main(model_id)
``` | 7d2172902b041af49c49dd2f1f39fd70 |
raisinbl/distilbert-base-uncased-finetuned-squad_2_512_1 | raisinbl | distilbert | 14 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,295 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_2_512_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2681 | 1.0 | 4079 | 1.2434 |
| 1.0223 | 2.0 | 8158 | 1.3153 |
| 0.865 | 3.0 | 12237 | 1.3225 |
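Note that validation loss bottoms out after the first epoch and rises afterwards — a typical overfitting signature. A common remedy is to keep the checkpoint with the lowest validation loss (e.g. `load_best_model_at_end=True` in the `Trainer`); a tiny sketch using the numbers from the table above:

```python
# (epoch, validation loss) pairs from the results table above
history = [(1, 1.2434), (2, 1.3153), (3, 1.3225)]
best_epoch, best_loss = min(history, key=lambda x: x[1])
print(best_epoch, best_loss)  # 1 1.2434
```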
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 2ce9b0bf7339625e7a62c03944e0dfce |
dbmdz/bert-base-historic-english-cased | dbmdz | bert | 20 | 24 | transformers | 1 | fill-mask | true | false | true | mit | ['english'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 9,317 | false |
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different thresholds of OCR confidence, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
![German Europeana Corpus Stats](stats/figures/german_europeana_corpus_stats.png)
## French Europeana Corpus
Like German, we use different ocr confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
![French Europeana Corpus Stats](stats/figures/french_europeana_corpus_stats.png)
## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:
![British Library Corpus Stats](stats/figures/bl_corpus_stats.png)
## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:
![Finnish Europeana Corpus Stats](stats/figures/finnish_europeana_corpus_stats.png)
## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:
![Swedish Europeana Corpus Stats](stats/figures/swedish_europeana_corpus_stats.png)
## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:
![All Corpora Stats](stats/figures/all_corpus_stats.png)
# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
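Subword fertility is the average number of subword pieces produced per whitespace token — a rough measure of how well the vocabulary fits a language. A toy illustration with a hypothetical WordPiece-style segmentation:

```python
def fertility(segmented_words):
    # segmented_words: one list of subword pieces per whitespace token
    pieces = sum(len(w) for w in segmented_words)
    return pieces / len(segmented_words)

segmented = [["the"], ["histor", "##ic"], ["news", "##paper"], ["corpus"]]
print(fertility(segmented))  # 1.5 — six pieces over four words
```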
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:
![Training loss curve](stats/figures/pretraining_loss_historic-multilingual.png)
## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:
![Training loss curve](stats/figures/pretraining_loss_historic_english.png)
## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:
![Training loss curve](stats/figures/pretraining_loss_finnish_europeana.png)
## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:
![Training loss curve](stats/figures/pretraining_loss_swedish_europeana.png)
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
| 51be2b9a515e5e761591542fadbb3029 |
Salesforce/codegen-16B-mono | Salesforce | codegen | 10 | 2,476 | transformers | 39 | text-generation | true | false | false | bsd-3-clause | null | null | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | [] | false | true | true | 2,941 | false | # CodeGen (CodeGen-Mono 16B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 16B** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 16B* and further pre-trained on a Python programming language dataset, and "16B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 16B) was firstly initialized with *CodeGen-Multi 16B*, and then pre-trained on BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
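In essence, the objective above is ordinary next-token cross-entropy under teacher forcing. A minimal pure-Python sketch on toy logits (illustrative only — the actual training operates on batched tensors across TPU pods):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def causal_lm_loss(logits_per_step, target_ids):
    """Mean cross-entropy of the target token at each position.

    logits_per_step[t] holds the vocabulary scores predicted
    *before* seeing target_ids[t] (teacher forcing).
    """
    losses = []
    for logits, target in zip(logits_per_step, target_ids):
        probs = softmax(logits)
        losses.append(-math.log(probs[target]))
    return sum(losses) / len(losses)

# Toy vocabulary of 4 tokens, sequence of 3 target tokens.
logits = [
    [2.0, 0.5, 0.1, 0.1],   # model strongly favours token 0
    [0.1, 3.0, 0.2, 0.1],   # favours token 1
    [0.3, 0.2, 0.1, 2.5],   # favours token 3
]
targets = [0, 1, 3]
loss = causal_lm_loss(logits, targets)
```

Minimizing this quantity is exactly "maximizing the likelihood of sequential inputs": a uniform model over a 4-token vocabulary scores `log(4)` nats, and anything better scores lower.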
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
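HumanEval results are conventionally reported as pass@k. A minimal sketch of the standard unbiased estimator (illustrative; the exact evaluation harness used in the paper may differ):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: n samples were drawn per problem, c of them passed."""
    if n - c < k:
        # Every size-k subset of the n samples contains a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 20 samples generated for a problem, 5 passed the unit tests:
score = pass_at_k(n=20, c=5, k=1)
```

The per-problem scores are then averaged over the benchmark to give the reported pass@k.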
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| d5bcc8f5d434d9aef4d9bdf1e84f33bf |
gstqtfr/ddpm-butterflies-128 | gstqtfr | null | 19 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,227 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (assumes `diffusers` is installed; this is the standard
# unconditional DDPM usage, not code taken from the training run):
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("gstqtfr/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gstqtfr/ddpm-butterflies-128/tensorboard?#scalars)
| a14d268f37057aadda595f11191320d2 |
racai/e4a-permits-distilbert-base-romanian-cased | racai | distilbert | 9 | 4 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,774 | false | The model generated in the Enrich4All project.<br>
We evaluated the perplexity on the MLM task after fine-tuning on a corpus related to construction permits.<br>
Baseline model: https://huggingface.co/racai/distilbert-base-romanian-cased <br>
Scripts and corpus used for training: https://github.com/racai-ai/e4all-models
Corpus
---------------
The construction authorization corpus is meant to ease the task of interested people who want to get informed about the legal framework for activities such as building, repairing, extending, and modifying their living environment, or setting up economic activities such as establishing commercial or industrial centers. It also aims to ease and reduce the workload of official representatives of regional administrative centers. The corpus is built to comply with the Romanian legislation in this domain and is structured in sets of labeled questions with a single answer each, covering various categories of issues:
* Construction activities and operations, including industrial structures, which require or do not require authorization,
* The necessary steps and documents to be acquired according to the Romanian regulations,
* validity terms,
* involved costs.
The data is acquired from two main sources:
* Internet: official sites, frequently asked questions
* Personal experiences of people: building permanent or provisory structures, replacing roofs, fences, installing photovoltaic panels, etc.
<br><br>
The construction permits corpus contains 500,351 words in 110 UTF-8 encoded files.
Results
-----------------
| MLM Task | Perplexity |
| --------------------------------- | ------------- |
| Baseline | 62.79 |
| Construction Permits Fine-tuning | 7.13 |
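For reference, perplexity is just the exponential of the mean per-token cross-entropy loss (in nats), so the figures above can be converted back and forth. A minimal sketch of the relationship (the 62.79 and 7.13 values come from the actual evaluation, not from this toy code):

```python
import math

def perplexity(mean_ce_loss):
    """Perplexity = exp(mean per-token cross-entropy, in nats)."""
    return math.exp(mean_ce_loss)

# The baseline perplexity of 62.79 corresponds to a mean loss of ~4.14 nats,
# the fine-tuned 7.13 to ~1.96 nats:
baseline_loss = math.log(62.79)
finetuned_loss = math.log(7.13)
```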
| 5beee211cdf9461616a53ab7c0dbad78 |
Gustavosta/MagicPrompt-Stable-Diffusion | Gustavosta | gpt2 | 10 | 34,242 | transformers | 264 | text-generation | true | false | false | mit | null | null | null | 1 | 0 | 0 | 1 | 5 | 4 | 1 | [] | false | true | true | 1,642 | false |
# MagicPrompt - Stable Diffusion
This is a model from the MagicPrompt series of models, which are [GPT-2](https://huggingface.co/gpt2) models intended to generate prompt texts for imaging AIs, in this case: [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion).
## 🖼️ Here's an example:
<img src="https://files.catbox.moe/ac3jq7.png">
This model was trained with 150,000 steps on a set of about 80,000 prompts filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". It was a little difficult to extract the data, since the search engine still doesn't have a public API and is protected by Cloudflare, but if you want to take a look at the original dataset, you can have a look here: [datasets/Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts).
If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)".
## 💻 You can see other MagicPrompt models:
- For Dall-E 2: [Gustavosta/MagicPrompt-Dalle](https://huggingface.co/Gustavosta/MagicPrompt-Dalle)
- For Midjourney: [Gustavosta/MagicPrompt-Midjourney](https://huggingface.co/Gustavosta/MagicPrompt-Midjourney) **[⚠️ In progress]**
- MagicPrompt full: [Gustavosta/MagicPrompt](https://huggingface.co/Gustavosta/MagicPrompt) **[⚠️ In progress]**
## ⚖️ Licence:
[MIT](https://huggingface.co/models?license=license:mit)
When using this model, please credit: [Gustavosta](https://huggingface.co/Gustavosta)
**Thanks for reading this far! :)**
| 17baacdefe42aa3ea803cba2435b4ea9 |
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_sst2_256 | gokuls | mobilebert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,903 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_sst2_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2641
- Accuracy: 0.7076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
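The `linear` scheduler listed above decays the learning rate from its peak to zero over training (optionally after a warmup phase, not used here). A minimal sketch of the shape — illustrative, not the exact `transformers` implementation:

```python
def linear_schedule(step, total_steps, peak_lr, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero."""
    if warmup_steps and step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    denom = max(total_steps - warmup_steps, 1)
    return peak_lr * remaining / denom

# Learning rate at each of 100 training steps, peaking at 5e-5:
lrs = [linear_schedule(s, total_steps=100, peak_lr=5e-5) for s in range(101)]
```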
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5438 | 1.0 | 527 | 1.4012 | 0.5814 |
| 1.364 | 2.0 | 1054 | 1.5474 | 0.5413 |
| 1.2907 | 3.0 | 1581 | 1.5138 | 0.5642 |
| 1.257 | 4.0 | 2108 | 1.4409 | 0.5665 |
| 1.2417 | 5.0 | 2635 | 1.4473 | 0.5929 |
| 1.2056 | 6.0 | 3162 | 1.2641 | 0.7076 |
| 0.6274 | 7.0 | 3689 | nan | 0.4908 |
| 0.0 | 8.0 | 4216 | nan | 0.4908 |
| 0.0 | 9.0 | 4743 | nan | 0.4908 |
| 0.0 | 10.0 | 5270 | nan | 0.4908 |
| 0.0 | 11.0 | 5797 | nan | 0.4908 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 9357fbfd770a0f380cce507aa5ef107a |
muhtasham/small-mlm-glue-stsb | muhtasham | bert | 12 | 10 | transformers | 1 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,496 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-stsb
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2666 | 0.7 | 500 | 2.7896 |
| 3.0117 | 1.39 | 1000 | 2.8245 |
| 2.9461 | 2.09 | 1500 | 2.7108 |
| 2.7341 | 2.78 | 2000 | 2.6721 |
| 2.7235 | 3.48 | 2500 | 2.6946 |
| 2.6687 | 4.17 | 3000 | 2.7103 |
| 2.5373 | 4.87 | 3500 | 2.7187 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 5e0bab21477dce15d7b1c50ae1e0573f |
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-5_female-5_s544 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 476 | false | # exp_w2v2r_en_xls-r_gender_male-5_female-5_s544
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 9a8fa87f2fdac2cb92267530a815c35d |
sd-concepts-library/dreamcore | sd-concepts-library | null | 9 | 0 | null | 11 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,018 | false | ### Dreamcore on Stable Diffusion
This is the `<dreamcore>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<dreamcore> 0](https://huggingface.co/sd-concepts-library/dreamcore/resolve/main/concept_images/0.jpeg)
![<dreamcore> 1](https://huggingface.co/sd-concepts-library/dreamcore/resolve/main/concept_images/1.jpeg)
![<dreamcore> 2](https://huggingface.co/sd-concepts-library/dreamcore/resolve/main/concept_images/3.jpeg)
![<dreamcore> 3](https://huggingface.co/sd-concepts-library/dreamcore/resolve/main/concept_images/2.jpeg)
| d76001009a517995db766cab47acc1d5 |
w11wo/sundanese-roberta-base | w11wo | roberta | 18 | 4 | transformers | 2 | fill-mask | true | false | true | mit | ['su'] | ['mc4', 'cc100', 'oscar', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sundanese-roberta-base'] | false | true | true | 3,610 | false |
## Sundanese RoBERTa Base
Sundanese RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).
10% of the dataset is kept for evaluation purposes. The model was trained from scratch and achieved an evaluation loss of 1.952 and an evaluation accuracy of 63.98%.
This model was trained using HuggingFace's Flax framework. All necessary scripts used for training could be found in the [Files and versions](https://hf.co/w11wo/sundanese-roberta-base/tree/main) tab, as well as the [Training metrics](https://hf.co/w11wo/sundanese-roberta-base/tensorboard) logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------ | ------- | ------- | ------------------------------------- |
| `sundanese-roberta-base` | 124M | RoBERTa | OSCAR, mC4, CC100, Wikipedia (758 MB) |
## Evaluation Results
The model was trained for 50 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 1.965 | 1.952 | 0.6398 | 6:24:51 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-roberta-base"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Budi nuju <mask> di sakola.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "w11wo/sundanese-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Budi nuju diajar di sakola."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases coming from all four training datasets, as they may be carried over into this model's outputs.
## Author
Sundanese RoBERTa Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` | 29af2d7ac8f4b78ecc7582c5f9865334 |
federicopascual/distilbert-base-uncased-finetuned-cola | federicopascual | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7480
- Matthews Correlation: 0.5370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5292 | 1.0 | 535 | 0.5110 | 0.4239 |
| 0.3508 | 2.0 | 1070 | 0.4897 | 0.4993 |
| 0.2346 | 3.0 | 1605 | 0.6275 | 0.5029 |
| 0.1806 | 4.0 | 2140 | 0.7480 | 0.5370 |
| 0.1291 | 5.0 | 2675 | 0.8841 | 0.5200 |
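Matthews correlation, the metric reported above, is computed from the binary confusion matrix. A minimal sketch (illustrative; the actual metric comes from the GLUE evaluation script):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

mcc = matthews_corrcoef([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

Unlike accuracy, MCC stays informative on the class-imbalanced CoLA dataset, which is why GLUE reports it for this task.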
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 5c7be50df0a3c24ae5a664b035b18357 |
SupriyaArun/distilbert-base-uncased-finetuned-squad | SupriyaArun | distilbert | 12 | 7 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,285 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2213 | 1.0 | 5533 | 1.1560 |
| 0.943 | 2.0 | 11066 | 1.1227 |
| 0.7633 | 3.0 | 16599 | 1.1569 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1592e1a7232ae6f8607ea2c7f2abb8da |
koankoan/mf-doomfusion | koankoan | null | 36 | 3 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 2,469 | false | ### MF Doomfusion Dreambooth model trained by koankoan with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
Doomfusion (use that on your prompt)
![Doomfusion 0](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%281%29.jpg)![Doomfusion 1](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%282%29.jpg)![Doomfusion 2](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%283%29.jpg)![Doomfusion 3](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%284%29.jpg)![Doomfusion 4](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%285%29.jpg)![Doomfusion 5](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%286%29.jpg)![Doomfusion 6](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%287%29.jpg)![Doomfusion 7](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%288%29.jpg)![Doomfusion 8](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%289%29.jpg)![Doomfusion 9](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2810%29.jpg)![Doomfusion 10](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2811%29.jpg)![Doomfusion 11](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2812%29.jpg)![Doomfusion 12](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2813%29.jpg)![Doomfusion 13](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2814%29.jpg)![Doomfusion 14](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2815%29.jpg)![Doomfusion 15](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2816%29.jpg)![Doomfusion 16](https://huggingface.co/koankoan/mf-doomfusion/resolve/main/concept_images/Doomfusion_%2817%29.jpg)
| cf40c59967e20a8f59e38f71f9ee48d7 |
jonatasgrosman/exp_w2v2t_fa_xls-r_s44 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fa'] | false | true | true | 452 | false | # exp_w2v2t_fa_xls-r_s44
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| cd4e41787e671c4e23cb77b0208e0bc0 |
hassnain/wav2vec2-base-timit-demo-colab57 | hassnain | wav2vec2 | 12 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,702 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab57
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7328
- Wer: 0.4593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9876 | 7.04 | 500 | 3.1483 | 1.0 |
| 1.4621 | 14.08 | 1000 | 0.6960 | 0.6037 |
| 0.4404 | 21.13 | 1500 | 0.6392 | 0.5630 |
| 0.2499 | 28.17 | 2000 | 0.6738 | 0.5281 |
| 0.1732 | 35.21 | 2500 | 0.6789 | 0.4952 |
| 0.1347 | 42.25 | 3000 | 0.7328 | 0.4835 |
| 0.1044 | 49.3 | 3500 | 0.7258 | 0.4840 |
| 0.0896 | 56.34 | 4000 | 0.7328 | 0.4593 |
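The Wer column above is the word error rate: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the number of reference words. A minimal Levenshtein-based sketch (illustrative; the training run used its own metric implementation):

```python
def wer(reference, hypothesis):
    """Word error rate via edit distance on whitespace-split words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6:
score = wer("the cat sat on the mat", "the cat sat on mat")
```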
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| d604d14906b444dcf6197f4e12ddb2da |
yongzx/gpt2-finetuned-oscar-ko | yongzx | gpt2 | 9 | 80 | transformers | 0 | feature-extraction | true | false | false | mit | ['ko'] | ['oscar'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-generation'] | false | true | true | 1,121 | false | # GPT-2 finetuned on Korean Dataset
### Tokenizer
We first trained a tokenizer on OSCAR's `unshuffled_original_ko` Korean data subset, following the training procedure of the GPT-2 tokenizer (same vocabulary size of 50,257). Here's the [Python file](https://github.com/bigscience-workshop/multilingual-modeling/blob/gpt2-ko/experiments/exp-001/train_tokenizer_gpt2.py) for the training.
### Model
We finetuned the `wte` and `wpe` layers of GPT-2 (while freezing the parameters of all other layers) on OSCAR's `unshuffled_original_ko` Korean data subset. We used [Huggingface's code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) for fine-tuning the causal language model GPT-2, but with the following parameters changed
```
- preprocessing_num_workers: 8
- per_device_train_batch_size: 2
- gradient_accumulation_steps: 4
- per_device_eval_batch_size: 2
- eval_accumulation_steps: 4
- eval_steps: 1000
- evaluation_strategy: "steps"
- max_eval_samples: 5000
```
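The selective fine-tuning described above (only `wte` and `wpe` trainable) amounts to toggling gradient flags by parameter name. A minimal framework-agnostic sketch — the `Param` stand-in and the parameter names are illustrative; in practice this is done over the model's named parameters in PyTorch:

```python
class Param:
    """Tiny stand-in for a framework tensor with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = True

def freeze_all_but(named_params, trainable_prefixes=("transformer.wte", "transformer.wpe")):
    """Leave only parameters whose names match the given prefixes trainable."""
    for name, param in named_params.items():
        param.requires_grad = name.startswith(trainable_prefixes)

# Hypothetical GPT-2-style parameter names:
params = {
    "transformer.wte.weight": Param(),
    "transformer.wpe.weight": Param(),
    "transformer.h.0.attn.c_attn.weight": Param(),
    "lm_head.weight": Param(),
}
freeze_all_but(params)
```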
**Training details**: total training steps: 688000, effective train batch size per step: 32, max tokens per batch: 1024
BSC-LT/roberta-base-ca | BSC-LT | roberta | 9 | 4 | transformers | 3 | fill-mask | true | false | false | apache-2.0 | ['ca'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['masked-lm', 'BERTa', 'catalan'] | false | true | true | 11,501 | false |
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca
# BERTa: RoBERTa-based Catalan language model
## BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
## Model description
BERTa is a transformer-based masked language model for the Catalan language.
It is based on the [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model
and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Training corpora and preprocessing
The training corpus consists of several corpora gathered from web crawling and public corpora.
The publicly available corpora are:
1. the Catalan part of the [DOGC](http://opus.nlpl.eu/DOGC-v2.php) corpus, a set of documents from the Official Gazette of the Catalan Government
2. the [Catalan Open Subtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.ca.gz), a collection of translated movie subtitles
3. the non-shuffled version of the Catalan part of the [OSCAR](https://traces1.inria.fr/oscar/) corpus,
a collection of monolingual corpora, filtered from [Common Crawl](https://commoncrawl.org/about/)
4. the [CaWac](http://nlp.ffzg.hr/resources/corpora/cawac/) corpus, a web corpus of Catalan built from the .cat top-level domain in late 2013;
we use the non-deduplicated version
5. the [Catalan Wikipedia articles](https://ftp.acc.umu.se/mirror/wikimedia.org/dumps/cawiki/20200801/) downloaded on 18-08-2020.
The crawled corpora are:
6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains
7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government
8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the [Catalan News Agency](https://www.acn.cat/)
To obtain a high-quality training corpus, each corpus has been preprocessed with a pipeline of operations including, among others,
sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive content.
During this process, document boundaries are kept.
Finally, the corpora are concatenated and a further global deduplication is applied across them.
The final training corpus consists of about 1.8B tokens.
## Tokenization and pretraining
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens.
The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model
with the same hyperparameters as in the original work.
The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.
## Evaluation
## CLUB benchmark
The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),
that has been created along with the model.
It contains the following tasks and their related datasets:
1. Part-of-Speech Tagging (POS)
Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus
2. Named Entity Recognition (NER)
**[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version,
filtering out some unconventional ones, like book titles, and transcribing them into a standard CONLL-IOB format
3. Text Classification (TC)
**[TeCla](https://doi.org/10.5281/zenodo.4627197)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus
4. Semantic Textual Similarity (STS)
**[Catalan semantic textual similarity](https://doi.org/10.5281/zenodo.4529183)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,
scraped from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349)
5. Question Answering (QA):
**[ViquiQuAD](https://doi.org/10.5281/zenodo.4562344)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.
**[XQuAD](https://doi.org/10.5281/zenodo.4526223)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_
Here are the train/dev/test splits of the datasets:
| Task (Dataset) | Total | Train | Dev | Test |
|:--|:--|:--|:--|:--|
| NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 |
| POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 |
| STS | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786|
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
_The fine-tuning on downstream tasks has been performed with the HuggingFace [**Transformers**](https://github.com/huggingface/transformers) library_
## Results
Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and the Catalan WikiBERT-ca model:
| Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) |
| ------------|:-------------:| -----:|:------|:-------|:------|:----|
| BERTa | **88.13** | **98.97** | **79.73** | **74.16** | **86.97/72.29** | **68.89/48.87** |
| mBERT | 86.38 | 98.82 | 76.34 | 70.56 | 86.97/72.22 | 67.15/46.51 |
| XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 |
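The QA columns above report token-level F1 and exact match (EM); a simplified sketch of how these two metrics are computed (official SQuAD-style evaluation additionally normalizes articles, casing and punctuation):

```python
from collections import Counter

def exact_match(prediction, reference):
    # EM: 1 if the predicted answer string equals the reference exactly
    return int(prediction == reference)

def token_f1(prediction, reference):
    # Token-level F1 between the predicted and reference answer spans
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Barcelona", "Barcelona"))                          # 1
print(round(token_f1("el cim del Tibidabo", "cim del Tibidabo"), 3))  # 0.857
```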
## Intended uses & limitations
The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.
---
## Using BERTa
## Load model and tokenizer
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("BSC-TeMU/roberta-base-ca-cased")
model = AutoModelForMaskedLM.from_pretrained("BSC-TeMU/roberta-base-ca-cased")
```
## Fill Mask task
Below is an example of how to use the masked language modelling task with a pipeline.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='BSC-TeMU/roberta-base-ca-cased')
>>> unmasker("Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.")
[
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.4177263379096985,
"token": 734,
"token_str": " Barcelona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.10696165263652802,
"token": 3849,
"token_str": " Badalona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.08135009557008743,
"token": 19349,
"token_str": " Collserola"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.07330769300460815,
"token": 4974,
"token_str": " Terrassa"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.03317456692457199,
"token": 14333,
"token_str": " Gavà"
}
]
```
This model was originally published as [bsc/roberta-base-ca-cased](https://huggingface.co/bsc/roberta-base-ca-cased). | c09007ffa339d39c0e4bc1181a48f2d0 |
Brokette/wav2vec2-base-timit-test5 | Brokette | wav2vec2 | 13 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 979 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-test5
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
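The linear scheduler above ramps the learning rate from 0 to 1e-4 over the first 1,000 warmup steps and then decays it linearly to 0; a sketch of the resulting schedule (the total step count here is an assumption for illustration):

```python
def linear_warmup_lr(step, peak_lr=1e-4, warmup_steps=1000, total_steps=10000):
    # Ramp linearly from 0 to peak_lr during warmup, then decay linearly to 0.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(500))    # 5e-05  (halfway through warmup)
print(linear_warmup_lr(1000))   # 0.0001 (peak)
print(linear_warmup_lr(10000))  # 0.0    (end of training)
```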
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.1.dev0
- Tokenizers 0.11.6
| b267d61ce724ba2c6843435d8e2cc1e5 |
speechbrain/tts-hifigan-libritts-22050Hz | speechbrain | null | 4 | 293 | speechbrain | 1 | text-to-speech | false | false | false | apache-2.0 | ['en'] | ['LibriTTS'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Vocoder', 'HiFIGAN', 'text-to-speech', 'TTS', 'speech-synthesis', 'speechbrain'] | false | true | true | 2,927 | false |
# Vocoder with HiFIGAN trained on LibriTTS
This repository provides all the necessary tools for using a [HiFIGAN](https://arxiv.org/abs/2010.05646) vocoder trained with [LibriTTS](https://www.openslr.org/60/) (with multiple speakers). The sample rate used for the vocoder is 22050 Hz.
The pre-trained model takes a spectrogram as input and produces a waveform as output. Typically, a vocoder is used after a TTS model that converts an input text into a spectrogram.
Alternatives to this model are the following:
- [tts-hifigan-libritts-16kHz](https://huggingface.co/speechbrain/tts-hifigan-libritts-16kHz/) (same model trained on the same dataset, but for a sample rate of 16000 Hz)
- [tts-hifigan-ljspeech](https://huggingface.co/speechbrain/tts-hifigan-ljspeech) (same model trained on LJSpeech for a sample rate of 22050 Hz).
## Install SpeechBrain
```bash
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Using the Vocoder
```python
import torch
from speechbrain.pretrained import HIFIGAN
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-22050Hz", savedir="tmpdir")
mel_specs = torch.rand(2, 80, 298)
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_specs)
```
### Using the Vocoder with the TTS
```python
import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN
# Initialize TTS (tacotron2) and Vocoder (HiFIGAN)
tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-22050Hz", savedir="tmpdir_vocoder")
# Running the TTS
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)
# Save the waveform
torchaudio.save('example_TTS.wav',waveforms.squeeze(1), 22050)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LibriTTS/vocoder/hifigan/
python train.py hparams/train.yaml --data_folder=/path/to/LibriTTS_data_destination --sample_rate=22050
```
To change the sample rate for model training go to the `"recipes/LibriTTS/vocoder/hifigan/hparams/train.yaml"` file and change the value for `sample_rate` as required.
The training logs and checkpoints are available [here](https://drive.google.com/drive/folders/1cImFzEonNYhetS9tmH9R_d0EFXXN0zpn?usp=sharing). | 3b6f7612b86e3fd3b46cdb3e1019ad3b |
jonatasgrosman/exp_w2v2t_id_no-pretraining_s724 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'id'] | false | true | true | 414 | false | # exp_w2v2t_id_no-pretraining_s724
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
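As noted above, input speech must be sampled at 16 kHz; if your audio is at another rate, resample it first. A minimal sketch using scipy on a synthetic 44.1 kHz signal (torchaudio or librosa would work equally well):

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 44100, 16000
t = np.linspace(0, 1, sr_in, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)           # 1 s of a 440 Hz tone at 44.1 kHz

# resample_poly expects integer up/down factors: 16000/44100 reduces to 160/441
resampled = resample_poly(audio, up=160, down=441)
print(len(audio), len(resampled))  # 44100 16000
```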
| a516974d948d015ca490121a51dfc905 |
shed-e/Summary | shed-e | mt5 | 14 | 3 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,973 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0318
- Rouge1: 0.1806
- Rouge2: 0.0917
- Rougel: 0.1765
- Rougelsum: 0.1766
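The ROUGE scores above measure n-gram overlap between generated and reference summaries; a simplified sketch of the ROUGE-1 F-measure (the real metric implementation additionally normalizes and stems the text):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    # Unigram overlap between a generated and a reference summary
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat lay on the mat"), 3))  # 0.833
```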
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 6.6121 | 1.0 | 1209 | 3.2969 | 0.1522 | 0.0635 | 0.1476 | 0.147 |
| 3.901 | 2.0 | 2418 | 3.1307 | 0.1672 | 0.0821 | 0.1604 | 0.1602 |
| 3.5788 | 3.0 | 3627 | 3.0910 | 0.1804 | 0.0922 | 0.1748 | 0.1751 |
| 3.4198 | 4.0 | 4836 | 3.0646 | 0.1717 | 0.0813 | 0.167 | 0.1664 |
| 3.321 | 5.0 | 6045 | 3.0659 | 0.1782 | 0.0877 | 0.1759 | 0.1756 |
| 3.2441 | 6.0 | 7254 | 3.0407 | 0.1785 | 0.088 | 0.1755 | 0.1751 |
| 3.2075 | 7.0 | 8463 | 3.0356 | 0.1789 | 0.09 | 0.1743 | 0.1747 |
| 3.1803 | 8.0 | 9672 | 3.0318 | 0.1806 | 0.0917 | 0.1765 | 0.1766 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| fd26923e3c775456d785574e86e963b9 |
OpenMatch/ance-tele_nq_psg-encoder | OpenMatch | bert | 7 | 2 | transformers | 0 | feature-extraction | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 899 | false | This model is the **passage** encoder of ANCE-Tele trained on NQ, described in the EMNLP 2022 paper ["Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives"](https://arxiv.org/pdf/2210.17167.pdf). The associated GitHub repository is available at https://github.com/OpenMatch/ANCE-Tele.
ANCE-Tele only trains with self-mined negatives (teleportation negatives) without using additional negatives (e.g., BM25, other DR systems) and eliminates the dependency on filtering strategies and distillation modules.
| NQ (Test) | R@5 | R@20 | R@100 |
|:---|:---|:---|:---|
| ANCE-Tele | 77.0 | 84.9 | 89.7 |
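At search time, passages encoded by this model are ranked against a query embedding (from the companion query encoder) by inner product; a toy numpy sketch of the scoring step, with random vectors standing in for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
query = rng.normal(size=768)             # stand-in for a query embedding
passages = rng.normal(size=(5, 768))     # stand-ins for passage embeddings

scores = passages @ query                # inner-product relevance scores
ranking = np.argsort(-scores)            # passage indices, best first
print(ranking)
```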
```
@inproceedings{sun2022ancetele,
title={Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives},
author={Si Sun and Chenyan Xiong and Yue Yu and Arnold Overwijk and Zhiyuan Liu and Jie Bao},
booktitle={Proceedings of EMNLP 2022},
year={2022}
}
``` | ad96b794ed097adc2a9622d9ab300bc9 |
gonced8/godel-multiwoz | gonced8 | t5 | 7 | 344 | transformers | 1 | text2text-generation | true | false | false | gpl-3.0 | ['en'] | ['multi_woz_v22'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,423 | false |
Pretrained model: [GODEL-v1_1-base-seq2seq](https://huggingface.co/microsoft/GODEL-v1_1-base-seq2seq/)
Fine-tuning dataset: [MultiWOZ 2.2](https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2)
# How to use:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("gonced8/godel-multiwoz")
model = AutoModelForSeq2SeqLM.from_pretrained("gonced8/godel-multiwoz")
# Encoder input
context = [
"USER: I need train reservations from norwich to cambridge",
"SYSTEM: I have 133 trains matching your request. Is there a specific day and time you would like to travel?",
"USER: I'd like to leave on Monday and arrive by 18:00.",
]
input_text = " EOS ".join(context[-5:]) + " => "
model_inputs = tokenizer(
input_text, max_length=512, truncation=True, return_tensors="pt"
)["input_ids"]
# Decoder input
answer_start = "SYSTEM: "
decoder_input_ids = tokenizer(
"<pad>" + answer_start,
max_length=256,
truncation=True,
add_special_tokens=False,
return_tensors="pt",
)["input_ids"]
# Generate
output = model.generate(
model_inputs, decoder_input_ids=decoder_input_ids, max_length=256
)
output = tokenizer.decode(
output[0], clean_up_tokenization_spaces=True, skip_special_tokens=True
)
print(output)
# SYSTEM: TR4634 arrives at 17:35. Would you like me to book that for you?
``` | b46f859c4d13cae335c6b73c9c93decb |
pcuenq/stable-diffusion-v1-4 | pcuenq | null | 19 | 19 | diffusers | 4 | text-to-image | false | false | false | other | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 13,634 | false |
# Stable Diffusion v1-4 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [come here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
## Examples
We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
```bash
pip install --upgrade diffusers transformers scipy
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline with the default PNDM scheduler:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
**Note**:
If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above. You can do so by telling diffusers to expect the weights to be in float16 precision:
```py
import torch
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
pipe = pipe.to(device)
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
To swap out the noise scheduler, pass it to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
model_id = "CompVis/stable-diffusion-v1-4"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
image = pipe(prompt, guidance_scale=7.5)["sample"][0]
image.save("astronaut_rides_horse.png")
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
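With the relative downsampling factor f = 8, the latent shapes follow directly from the image resolution; a quick sketch of the arithmetic:

```python
def latent_shape(h, w, f=8, latent_channels=4):
    # An H x W x 3 image maps to an (H/f) x (W/f) x 4 latent
    return (h // f, w // f, latent_channels)

print(latent_shape(512, 512))  # (64, 64, 4)
```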
We currently provide four checkpoints, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
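At each denoising step, classifier-free guidance combines the unconditional and conditional noise predictions with the chosen guidance scale; a schematic numpy sketch of the combination rule (shapes are illustrative):

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, guidance_scale=7.5):
    # eps = eps_uncond + s * (eps_cond - eps_uncond); s = 1 disables guidance
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

uncond = np.zeros((4, 64, 64))   # illustrative latent-shaped predictions
cond = np.ones((4, 64, 64))
out = cfg_combine(uncond, cond, guidance_scale=7.5)
print(out.mean())  # 7.5
```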
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
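The reported figure is consistent with the calculator's power × time × grid-intensity formula; a back-of-the-envelope sketch (the ~250 W average draw and ~0.3 kg CO2 eq./kWh intensity are assumptions chosen for illustration):

```python
gpu_power_kw = 0.25      # assumed average draw of one A100 PCIe 40GB, in kW
gpu_hours = 150000       # hours reported above
grid_intensity = 0.3     # assumed kg CO2 eq. per kWh

emissions_kg = gpu_power_kw * gpu_hours * grid_intensity
print(emissions_kg)  # 11250.0
```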
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | eab95941de8795babd02f7686fca3166 |
OpenMatch/cocodr-large | OpenMatch | bert | 8 | 3 | transformers | 0 | fill-mask | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 430 | false |
This model has been pretrained on the BEIR corpus without relevance-level supervision, following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR.
This model uses BERT-large as its backbone and has 335M parameters.
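The card does not include a usage example. A minimal sketch (not part of the original card) for extracting a dense embedding with 🤗 Transformers; taking the [CLS] vector as the representation is an assumption consistent with common dense-retrieval setups, not something the card specifies:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/cocodr-large")
model = AutoModel.from_pretrained("OpenMatch/cocodr-large")

inputs = tokenizer("what is dense retrieval?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Use the [CLS] token's hidden state as the dense vector (assumed pooling).
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)  # (1, 1024) for a BERT-large backbone
```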
| c0448001cb6a97623a4d5d36d814e323 |
sd-concepts-library/altyn-helmet | sd-concepts-library | null | 12 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,330 | false | ### Altyn-Helmet on Stable Diffusion
This is the `<Altyn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<Altyn> 0](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/0.jpeg)
![<Altyn> 1](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/3.jpeg)
![<Altyn> 2](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/4.jpeg)
![<Altyn> 3](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/5.jpeg)
![<Altyn> 4](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/1.jpeg)
![<Altyn> 5](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/6.jpeg)
![<Altyn> 6](https://huggingface.co/sd-concepts-library/altyn-helmet/resolve/main/concept_images/2.jpeg)
| 6cd511e93bd8d9eab1aab381e6d28244 |
eugeneware/ddpm-butterflies-128 | eugeneware | null | 17 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,232 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A hedged example (filling the card's TODO): sampling a single image with
# the standard 🤗 Diffusers DDPMPipeline. This assumes the checkpoint loads
# as an unconditional DDPM pipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("eugeneware/ddpm-butterflies-128")
image = pipeline().images[0]  # a 128x128 PIL image
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/eugeneware/ddpm-butterflies-128/tensorboard?#scalars)
| 35c578ad4c5d5c2e82afee5910b4c73d |
takizawa/xlm-roberta-base-finetuned-panx-it | takizawa | xlm-roberta | 10 | 19 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
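The card provides no usage example; a minimal inference sketch (not from the original card) using the standard 🤗 token-classification pipeline, with an illustrative Italian sentence:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="takizawa/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
entities = ner("Roma è la capitale d'Italia.")
print(entities)
```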
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| 7db12df5daa6292916e0b86799aa0ca1 |
tj-solergibert/distilbert-base-uncased-finetuned-emotion | tj-solergibert | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9285
- F1: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
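In the absence of card-provided usage notes, a minimal inference sketch (not part of the original card) with the standard 🤗 text-classification pipeline; the input sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tj-solergibert/distilbert-base-uncased-finetuned-emotion",
)
preds = classifier("I can't wait to see the results!")
print(preds)  # e.g. [{'label': ..., 'score': ...}]
```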
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3085 | 0.915 | 0.9127 |
| 0.2493 | 2.0 | 500 | 0.2158 | 0.9285 | 0.9286 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 54f0f1c050987403eec511a944763571 |
younes9/AI-DAY-distilbert-base-uncased-finetuned-cola | younes9 | distilbert | 13 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,579 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AI-DAY-distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7236
- Matthews Correlation: 0.5382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5308 | 1.0 | 535 | 0.5065 | 0.4296 |
| 0.3565 | 2.0 | 1070 | 0.5109 | 0.4940 |
| 0.2399 | 3.0 | 1605 | 0.6056 | 0.5094 |
| 0.1775 | 4.0 | 2140 | 0.7236 | 0.5382 |
| 0.1242 | 5.0 | 2675 | 0.8659 | 0.5347 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
| 195929f5278b814f83e890f62ea1f838 |
sd-concepts-library/gymnastics-leotard-v2 | sd-concepts-library | null | 11 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,386 | false | ### Gymnastics Leotard v2 on Stable Diffusion
This is the `<gymnastics-leotard2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
![<gymnastics-leotard2> 0](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/3.jpeg)
![<gymnastics-leotard2> 1](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/0.jpeg)
![<gymnastics-leotard2> 2](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/5.jpeg)
![<gymnastics-leotard2> 3](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/1.jpeg)
![<gymnastics-leotard2> 4](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/2.jpeg)
![<gymnastics-leotard2> 5](https://huggingface.co/sd-concepts-library/gymnastics-leotard-v2/resolve/main/concept_images/4.jpeg)
| bbf7d6a3550ea992c533fed34e5c7ffa |
pranav-s/MaterialsBERT | pranav-s | bert | 7 | 5 | transformers | 0 | feature-extraction | true | false | false | other | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['transformers', 'feature-extraction', 'materials'] | false | true | true | 2,980 | false |
# MaterialsBERT
This model is a fine-tuned version of [PubMedBERT model](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on a dataset of 2.4 million materials science abstracts.
It was introduced in [this](https://arxiv.org/abs/2209.13136) paper. This model is uncased.
## Model description
Domain-specific fine-tuning has been [shown](https://arxiv.org/abs/2007.15779) to improve performance on a variety of downstream NLP tasks. MaterialsBERT fine-tunes PubMedBERT, a pre-trained language model trained on biomedical literature. This model was chosen because the biomedical domain is close to the materials science domain. When further fine-tuned on a variety of downstream sequence-labeling tasks in materials science, MaterialsBERT outperformed the other baseline language models tested on three out of five datasets.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on materials-science relevant downstream tasks.
Note that this model is primarily aimed at being fine-tuned on tasks that use a sentence or a paragraph (potentially masked)
to make decisions, such as sequence classification, token classification or question answering.
## How to Use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('pranav-s/MaterialsBERT')
model = BertModel.from_pretrained('pranav-s/MaterialsBERT')
text = "Enter any text you like"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the contextual token features
```
## Training data
A fine-tuning corpus of 2.4 million materials science abstracts was used. The DOIs of the journal articles used are provided in the file training_DOI.txt
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
## Citation
If you find MaterialsBERT useful in your research, please cite the following paper:
```bibtex
@misc{materialsbert,
      author = {Pranav Shetty and Arunkumar Chitteth Rajan and Christopher Kuenneth and Sonakshi Gupta and Lakshmi Prerana Panchumarti and Lauren Holm and Chao Zhang and Rampi Ramprasad},
title = {A general-purpose material property data extraction pipeline from large polymer corpora using Natural Language Processing},
year = {2022},
eprint = {arXiv:2209.13136},
}
```
<a href="https://huggingface.co/exbert/?model=pranav-s/MaterialsBERT">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 6daa73922c7ae58145599643f796a8ce |
Pablo94/racism-finetuned-detests-02-11-2022 | Pablo94 | roberta | 13 | 5 | transformers | 0 | text-classification | true | false | false | cc | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,041 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# racism-finetuned-detests-02-11-2022
This model is a fine-tuned version of [davidmasip/racism](https://huggingface.co/davidmasip/racism) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8819
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
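The card provides no usage example; a hedged inference sketch (not from the original card) with the standard 🤗 text-classification pipeline, using an illustrative Spanish sentence:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Pablo94/racism-finetuned-detests-02-11-2022",
)
preds = detector("Texto de ejemplo a clasificar.")
print(preds)
```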
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3032 | 0.64 | 25 | 0.3482 | 0.6434 |
| 0.1132 | 1.28 | 50 | 0.3707 | 0.6218 |
| 0.1253 | 1.92 | 75 | 0.4004 | 0.6286 |
| 0.0064 | 2.56 | 100 | 0.6223 | 0.6254 |
| 0.0007 | 3.21 | 125 | 0.7347 | 0.6032 |
| 0.0006 | 3.85 | 150 | 0.7705 | 0.6312 |
| 0.0004 | 4.49 | 175 | 0.7988 | 0.6304 |
| 0.0003 | 5.13 | 200 | 0.8206 | 0.6255 |
| 0.0003 | 5.77 | 225 | 0.8371 | 0.6097 |
| 0.0003 | 6.41 | 250 | 0.8503 | 0.6148 |
| 0.0003 | 7.05 | 275 | 0.8610 | 0.6148 |
| 0.0002 | 7.69 | 300 | 0.8693 | 0.6199 |
| 0.0002 | 8.33 | 325 | 0.8755 | 0.6199 |
| 0.0002 | 8.97 | 350 | 0.8797 | 0.6199 |
| 0.0002 | 9.62 | 375 | 0.8819 | 0.6199 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 7b2223f3d38febe9606bcaed8a213ba8 |
Helsinki-NLP/opus-mt-zne-fr | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-zne-fr
* source languages: zne
* target languages: fr
* OPUS readme: [zne-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fr | 25.3 | 0.416 |
| 27630f022f2961ae047780d720fa853c |
coreml/coreml-knollingcase | coreml | null | 4 | 0 | null | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coreml', 'stable-diffusion', 'text-to-image'] | false | true | true | 3,564 | false | # Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# knollingcase:
Source(s): [Hugging Face](https://huggingface.co/Aybeeceedee/knollingcase) - [CivitAI](https://civitai.com/models/1092/knollingcase)
### knollingcase Dreambooth model trained by Aybeeceedee with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
UPDATE: The images on the following imgur link were all made just using 'knollingcase' as the main prompt, then letting the Dynamic Prompt add-on for AUTOMATIC1111 fill in the rest! There are some beauties in here if you have a scroll through!
https://imgur.com/gallery/MQtdkv5
Use 'knollingcase' anywhere in the prompt and you're good to go.
Example: knollingcase, isometric render, a single cherry blossom tree, isometric display case, knolling teardown, transparent data visualization infographic, high-resolution OLED GUI interface display, micro-details, octane render, photorealism, photorealistic
Example: (clockwork:1.2), knollingcase, labelled, overlays, oled display, annotated, technical, knolling diagram, technical drawing, display case, dramatic lighting, glow, dof, reflections, refractions
<img src="https://preview.redd.it/998kvw5cja2a1.png?width=640&crop=smart&auto=webp&s=74b9d242271215b1b75e7938e1f5ca7259ccbf50" width="512px" />
<img src="https://preview.redd.it/2zwc1aul3c2a1.png?width=1024&format=png&auto=webp&s=74c05cc9a3f21654d6b559a62ffda9a7d7da2955" width="512px" />
<img src="https://preview.redd.it/sindy9s24c2a1.png?width=1024&format=png&auto=webp&s=cffbfe28810e646b73f7499b0ebc512a2d85fba2" width="512px" />
<img src="https://preview.redd.it/lpg5b2yqja2a1.png?width=640&crop=smart&auto=webp&s=f10965c95439df5724a8d6e5fa45f111d6f9f0d6" width="512px" />
<img src="https://preview.redd.it/qxa04u7qja2a1.png?width=640&crop=smart&auto=webp&s=fe7b86632241ceff5c79992de2abb064878dfd49" width="512px" />
<img src="https://preview.redd.it/my747twpja2a1.png?width=640&crop=smart&auto=webp&s=e576e16e9e34168209ce0b51bc2d52a139f2dd30" width="512px" />
<img src="https://preview.redd.it/lzrbu1vcja2a1.png?width=640&crop=smart&auto=webp&s=494437df5ce587ac15fee80957455ff999e26af9" width="512px" />
<img src="https://preview.redd.it/zalueljcja2a1.png?width=640&crop=smart&auto=webp&s=550e420d6ea9c89581394f298e947cdf9a240012" width="512px" />
<img src="https://preview.redd.it/8zxn9t8bja2a1.png?width=640&crop=smart&auto=webp&s=dc97582482573a39ec4fabe7a03deb62332df5ed" width="512px" />
<img src="https://preview.redd.it/zeiuwkwaja2a1.png?width=640&crop=smart&auto=webp&s=2f13fa6386b302ab35df2ed9ea328f7f442cbad7" width="512px" />
<img src="https://preview.redd.it/3l43tqa23c2a1.png?width=1024&format=png&auto=webp&s=79dab20b3d2d8b8c963a67f7aefe1d1b724a86d8" width="512px" />
| c8b72ca319691d149943e43757cdb137 |
lirondos/anglicisms-spanish-mbert | lirondos | bert | 8 | 14 | transformers | 0 | token-classification | true | false | false | cc-by-4.0 | ['es'] | ['coalas'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['anglicisms', 'loanwords', 'borrowing', 'codeswitching', 'arxiv:2203.16169'] | false | true | true | 4,423 | false |
# anglicisms-spanish-mbert
This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) in Spanish newswire. The model labels words of foreign origin (mainly from English) used in the Spanish language, words such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*.
The model is a fine-tuned version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings.
The model considers two labels:
* ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*)
* ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*)
The model uses BIO encoding to account for multitoken borrowings.
**⚠ This is not the best-performing model for this task.** For the best-performing model (F1=85.76) see [Flair model](https://huggingface.co/lirondos/anglicisms-spanish-flair-cs).
## Metrics (on the test set)
The following table summarizes the results obtained on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus.
| LABEL | Precision | Recall | F1 |
|:-------|-----:|-----:|---------:|
| ALL | 88.09 | 79.46 | 83.55 |
| ENG | 88.44 | 82.16 | 85.19 |
| OTHER | 37.5 | 6.52 | 11.11 |
## Dataset
This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and covers a variety of media outlets written in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens).
|Set | Tokens | ENG | OTHER | Unique |
|:-------|-----:|-----:|---------:|---------:|
|Training |231,126 |1,493 | 28 |380 |
|Development |82,578 |306 |49 |316|
|Test |58,997 |1,239 |46 |987|
|**Total** |372,701 |3,038 |123 |1,683 |
## More info
More information about the dataset, model experimentation and error analysis can be found in the paper: *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*.
## How to use
```
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("lirondos/anglicisms-spanish-mbert")
model = AutoModelForTokenClassification.from_pretrained("lirondos/anglicisms-spanish-mbert")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Buscamos data scientist para proyecto de machine learning."
borrowings = nlp(example)
print(borrowings)
```
## Citation
If you use this model, please cite the following reference:
```
@inproceedings{alvarez-mellado-lignos-2022-detecting,
title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling",
author = "{\'A}lvarez-Mellado, Elena and
Lignos, Constantine",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.268",
pages = "3868--3888",
abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
}
```
| f9e43fae3cf541203ed004fd333e121a |
Wiebke/distilbert-base-uncasedsexist_baseline | Wiebke | distilbert | 16 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 950 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncasedsexist_baseline
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| c338782a463c66032b1cd97d1d39424d |
bayartsogt/whisper-small-mn-7 | bayartsogt | whisper | 19 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['mozilla-foundation/common_voice_11_0', 'google/fleurs', 'bayartsogt/ulaanbal-v0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer'] | true | true | true | 2,850 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-mn-7
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3061
- Wer: 32.6469
- Cer: 11.2319
## Model description
More information needed
## Intended uses & limitations
More information needed
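The card provides no usage example; a hedged inference sketch (not from the original card) with the standard 🤗 ASR pipeline. The silent dummy input below is only to make the snippet self-contained — replace it with a path to a real recording (e.g. "recording.mp3") or raw 16 kHz samples:

```python
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bayartsogt/whisper-small-mn-7",
)
# One second of silence at 16 kHz, as a stand-in for real audio.
result = asr({"raw": np.zeros(16_000, dtype=np.float32), "sampling_rate": 16_000})
print(result["text"])
```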
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 15000
- mixed_precision_training: Native AMP
### Training script
```bash
python train.py \
--train_datasets "mozilla-foundation/common_voice_11_0|mn|train+validation,google/fleurs|mn_mn|train+validation,bayartsogt/ulaanbal-v0||train" \
--eval_datasets "mozilla-foundation/common_voice_11_0|mn|test" \
--whisper-size "small" \
--language "mn,Mongolian" \
--keep-chars " абвгдеёжзийклмноөпрстуүфхцчшъыьэюя.,?!" \
--train-batch-size 32 \
--eval-batch-size 32 \
--max-steps 15000 \
--num-workers 8 \
--version 7 \
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.3416 | 0.61 | 1000 | 0.4335 | 51.0979 | 17.8608 |
| 0.2266 | 1.22 | 2000 | 0.3383 | 39.5346 | 13.6468 |
| 0.2134 | 1.83 | 3000 | 0.2994 | 35.6565 | 12.1677 |
| 0.165 | 2.43 | 4000 | 0.2927 | 34.1927 | 11.4602 |
| 0.1205 | 3.04 | 5000 | 0.2879 | 33.5209 | 11.3002 |
| 0.1284 | 3.65 | 6000 | 0.2884 | 32.7507 | 10.9885 |
| 0.0893 | 4.26 | 7000 | 0.3022 | 33.0894 | 11.2075 |
| 0.0902 | 4.87 | 8000 | 0.3061 | 32.6469 | 11.2319 |
| 0.065 | 5.48 | 9000 | 0.3233 | 32.8163 | 11.1595 |
| 0.0436 | 6.09 | 10000 | 0.3372 | 32.6852 | 11.1384 |
| 0.0469 | 6.7 | 11000 | 0.3481 | 32.8272 | 11.2867 |
| 0.0292 | 7.3 | 12000 | 0.3643 | 33.0784 | 11.3785 |
| 0.0277 | 7.91 | 13000 | 0.3700 | 33.1877 | 11.3600 |
| 0.0196 | 8.52 | 14000 | 0.3806 | 33.3734 | 11.4273 |
| 0.016 | 9.13 | 15000 | 0.3844 | 33.3188 | 11.4248 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| d45e15df00b0c95e25608d3ede741bb8 |
gokuls/distilbert_add_GLUE_Experiment_logit_kd_qnli_96 | gokuls | distilbert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,813 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_qnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
- Accuracy: 0.5806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4153 | 1.0 | 410 | 0.4114 | 0.5054 |
| 0.4152 | 2.0 | 820 | 0.4115 | 0.5054 |
| 0.4129 | 3.0 | 1230 | 0.4020 | 0.5686 |
| 0.3995 | 4.0 | 1640 | 0.3984 | 0.5806 |
| 0.3934 | 5.0 | 2050 | 0.3992 | 0.5794 |
| 0.3888 | 6.0 | 2460 | 0.4024 | 0.5810 |
| 0.385 | 7.0 | 2870 | 0.4105 | 0.5675 |
| 0.3808 | 8.0 | 3280 | 0.4050 | 0.5777 |
| 0.3768 | 9.0 | 3690 | 0.4074 | 0.5722 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 7b4eb8302f75bdadaa58c003adf0a4b0 |
tejavoo/my_awesome_qa_model | tejavoo | distilbert | 12 | 1 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,259 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7293
## Model description
More information needed
## Intended uses & limitations
More information needed
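The card provides no usage example; a minimal question-answering sketch (not from the original card) with the standard 🤗 pipeline API — the question and context below are illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="tejavoo/my_awesome_qa_model")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "on the SQuAD dataset.",
)
print(answer["answer"])
```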
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4442 |
| 2.7857 | 2.0 | 500 | 1.7960 |
| 2.7857 | 3.0 | 750 | 1.7293 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| ae71bdc66897940992881dc48c888482 |
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_sst2_256 | gokuls | distilbert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,751 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_sst2_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7397
- Accuracy: 0.8108
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1137 | 1.0 | 264 | 0.7619 | 0.8016 |
| 0.5525 | 2.0 | 528 | 0.7758 | 0.8050 |
| 0.4209 | 3.0 | 792 | 0.7397 | 0.8108 |
| 0.3585 | 4.0 | 1056 | 0.8179 | 0.8085 |
| 0.3153 | 5.0 | 1320 | 0.8172 | 0.7982 |
| 0.2824 | 6.0 | 1584 | 0.8974 | 0.8096 |
| 0.2512 | 7.0 | 1848 | 0.9205 | 0.7924 |
| 0.2315 | 8.0 | 2112 | 0.9320 | 0.8016 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 5afc51fdf051691cb64120c28cb6cd60 |
taraxis/nekotest1-1 | taraxis | null | 17 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 503 | false | # LoRA DreamBooth - nekotest1-1
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "nekotst" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: a tabby cat
![image_0](test_images/image_0.png)
![image_1](test_images/image_1.png)
![image_2](test_images/image_2.png)
![image_3](test_images/image_3.png)
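Under the hood, a LoRA adapter stores only a pair of low-rank matrices whose scaled product is added to the frozen base weights at load time. A toy numpy sketch of that idea (illustrative dimensions and variable names, not the diffusers implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 2.0          # hidden size, LoRA rank (r << d), scaling
W = rng.normal(size=(d, d))      # frozen base weight
A = rng.normal(size=(r, d))      # trainable down-projection
B = np.zeros((d, r))             # trainable up-projection, zero-initialised

# Effective weight at inference: base plus scaled low-rank update (alpha / r).
W_eff = W + (alpha / r) * (B @ A)

# With B initialised to zero, the adapter starts as an exact no-op.
assert np.allclose(W_eff, W)
print(W_eff.shape)  # (8, 8)
```

Because `B` starts at zero, training begins from the unmodified base model, while the update `B @ A` can never exceed rank `r`.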
| 529c96b2d5701be811794e340cdf4d2a |
coreml/coreml-elldreths-dream-mix | coreml | null | 5 | 0 | null | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['coreml', 'stable-diffusion', 'text-to-image'] | false | true | true | 1,105 | false | # Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
# Note: This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Elldreth's Dream Mix:
Source(s): [CivitAI](https://civitai.com/models/1254/elldreths-dream-mix)
This mixed model is a combination of some of my favorites. A little Pyros Model A mixed with a F111-sd14 diff and mixed into Anything.
What's it good at?

- Portraits
- Landscapes
- Fantasy
- Sci-Fi
- Anime
- Semi-realistic
- Horror
It's an all-around easy-to-prompt general purpose model that cranks out some really nice images. No trigger words required. All models were scanned prior to mixing and are totally safe.
MartinoMensio/racism-models-w-m-vote-strict-epoch-3 | MartinoMensio | bert | 4 | 125 | transformers | 0 | text-classification | true | false | false | mit | ['es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 4,100 | false |
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning, resulting in 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `w-m-vote-strict-epoch-3`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'w-m-vote-strict-epoch-3'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.9619585871696472}, {'label': 'non-racist', 'score': 0.9396700859069824}]
```
For more details, see https://github.com/preyero/neatclass22
| 20fbe4c6b7d57265113f4d8a675d2a45 |
rsuwaileh/IDRISI-LMR-AR-random-typebased | rsuwaileh | bert | 8 | 26 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,763 | false |
This model is a BERT-based Location Mention Recognition model adapted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in the text and predicts their location types. Location types can be either coarse-grained (e.g., country, city) or fine-grained (e.g., street, POI).
The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-based` LMR mode and using the `Random` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/AR/gold-random-bilou/). More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models).
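The linked gold data uses `BILOU` tagging. A minimal, illustrative decoder (not the repository's own code) that turns a BILOU tag sequence back into typed spans might look like:

```python
def bilou_to_spans(tokens, tags):
    """Collapse BILOU tags (B-, I-, L-, U- prefixes plus O) into
    (start, end, type, text) spans over the token list."""
    spans, start = [], None
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        prefix, _, label = tag.partition("-")
        if prefix == "U":                          # single-token mention
            spans.append((i, i + 1, label, tok))
            start = None
        elif prefix == "B":                        # mention begins
            start = i
        elif prefix == "L" and start is not None:  # mention ends
            spans.append((start, i + 1, label, " ".join(tokens[start:i + 1])))
            start = None
        elif prefix != "I":                        # "O" or a malformed tag
            start = None
    return spans

# Illustrative example with a hypothetical POI mention.
tokens = ["Flooding", "near", "Doha", "Corniche", "today"]
tags = ["O", "O", "B-POI", "L-POI", "O"]
print(bilou_to_spans(tokens, tags))  # [(2, 4, 'POI', 'Doha Corniche')]
```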
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/)
* English models are also available:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite the models:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
| b4fc4aa0aa921107d0f0366513ac620b |
henryscheible/eval_wnli | henryscheible | bert | 11 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 885 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| 9c31080a0ff2274e2710f68e02cd4fa3 |
Gozdi/Electra-base-squad-adversarialqa-epoch-2 | Gozdi | electra | 8 | 3 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,446 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Electra-base-squad-adversarialqa-epoch-2
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results during training:
- Train Loss: 0.9140
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 43062, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1104, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
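The serialized optimizer above encodes a linear warmup over 1104 steps followed by a linear (power 1.0) decay over 43062 steps down to 0. A rough, illustrative reading of that schedule (the exact step offsets in the Keras implementation may differ):

```python
def lr_at_step(step, peak_lr=5e-05, warmup_steps=1104, decay_steps=43062):
    """Linear warmup to peak_lr, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    decay_step = min(step - warmup_steps, decay_steps)
    return peak_lr * (1 - decay_step / decay_steps)

# Learning rate at the start, at the end of warmup, and after full decay.
print(lr_at_step(0), lr_at_step(1104), lr_at_step(1104 + 43062))
```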
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.9140 | 1 |
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| 3d5c667c841500d0c755a63b8042039c |
graphcore-rahult/roberta-base-finetuned-swag | graphcore-rahult | roberta | 12 | 2 | transformers | 0 | null | true | false | false | mit | null | ['swag'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,467 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-swag
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5190
- Accuracy: 0.8260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- total_eval_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0993 | 1.0 | 2298 | 0.5474 | 0.7871 |
| 0.2222 | 2.0 | 4596 | 0.4744 | 0.8181 |
| 0.1633 | 3.0 | 6894 | 0.5190 | 0.8260 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
| 06b3af2debaf03bdc630886a3a5f66d9 |
sd-concepts-library/my-hero-academia-style | sd-concepts-library | null | 13 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,559 | false | ### My hero academia style on Stable Diffusion
This is the `<MHA style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<MHA style> 0](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/5.jpeg)
![<MHA style> 1](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/6.jpeg)
![<MHA style> 2](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/3.jpeg)
![<MHA style> 3](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/0.jpeg)
![<MHA style> 4](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/2.jpeg)
![<MHA style> 5](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/7.jpeg)
![<MHA style> 6](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/1.jpeg)
![<MHA style> 7](https://huggingface.co/sd-concepts-library/my-hero-academia-style/resolve/main/concept_images/4.jpeg)
| 3d20d007369c9a88d5549aff356b86ff |
HrayrMSint/distilbert-base-uncased-finetuned-clinc | HrayrMSint | distilbert | 9 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,475 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 |
| 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 |
| 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 |
| 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 |
| 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
| ba3c67eba3d5c4ff19e613d0e1704bf3 |
google/multiberts-seed_1 | google | bert | 8 | 271 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_1'] | false | true | true | 3,328 | false |
# MultiBERTs - Seed 1
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1.
## Model Description
This model is a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
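To illustrate the kind of across-seed analysis these checkpoints enable, here is a toy bootstrap sketch (made-up accuracy numbers, not the paper's multibootstrap library):

```python
import random
import statistics

# Hypothetical dev-set accuracies for five seeds of two fine-tuning
# setups -- illustrative numbers only.
setup_a = [0.842, 0.836, 0.851, 0.845, 0.839]
setup_b = [0.848, 0.852, 0.846, 0.855, 0.850]

def bootstrap_mean_diffs(a, b, n_resamples=10_000, seed=0):
    """Resample seeds with replacement and collect mean differences,
    asking whether B > A holds beyond seed-to-seed variation."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_resamples):
        ra = [rng.choice(a) for _ in a]
        rb = [rng.choice(b) for _ in b]
        diffs.append(statistics.mean(rb) - statistics.mean(ra))
    return diffs

diffs = bootstrap_mean_diffs(setup_a, setup_b)
# Mean difference and the fraction of resamples where setup B wins.
print(statistics.mean(diffs), sum(d > 0 for d in diffs) / len(diffs))
```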
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1')
model = TFBertModel.from_pretrained("google/multiberts-seed_1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1')
model = BertModel.from_pretrained("google/multiberts-seed_1")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| edf9f34b3fca0c086fc8861d33159afd |
Teklia/pylaia-huginmunin-postprocessed | Teklia | null | 8 | 0 | PyLaia | 0 | null | false | false | false | mit | ['no'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PyLaia', 'PyTorch', 'Handwritten text recognition'] | false | true | true | 2,085 | false |
# Hugin-Munin handwritten text recognition
This model performs Handwritten Text Recognition in Norwegian. It was developed during the [HUGIN-MUNIN project](https://hugin-munin-project.github.io/).
## Model description
The model has been trained using the PyLaia library on the [NorHand](https://zenodo.org/record/6542056) document images.
Line bounding boxes were improved using a post-processing step.
Training images were resized with a fixed height of 128 pixels, keeping the original aspect ratio.
## Evaluation results
The model achieves the following results:
| set | CER (%) | WER (%) |
| ----- | ---------- | --------- |
| train | 2.33 | 5.62 |
| val | 8.20 | 24.75 |
| test | 7.81 | 23.3 |
Results improve on validation and test sets when PyLaia is combined with a 6-gram language model.
The language model is trained on [this text corpus](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-73/) published by the National Library of Norway.
| set | CER (%) | WER (%) |
| ----- | ---------- | --------- |
| train | 2.62 | 6.13 |
| val | 7.01 | 19.75 |
| test | 6.75 | 18.22 |
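CER and WER are edit-distance rates over characters and words respectively. A minimal, dependency-free sketch of the computation (not the project's actual evaluation code):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

def error_rate(reference, hypothesis):
    # Edit distance normalised by the reference length.
    return levenshtein(reference, hypothesis) / len(reference)

# CER operates on characters, WER on whitespace-separated tokens.
cer = error_rate("kveld", "kvelt")
wer = error_rate("god kveld".split(), "god kvelt".split())
print(cer, wer)  # 0.2 0.5
```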
## How to use
Please refer to the PyLaia library page (https://pypi.org/project/pylaia/) to use this model.
# Cite us!
```bibtex
@inproceedings{10.1007/978-3-031-06555-2_27,
author = {Maarand, Martin and Beyer, Yngvil and K\r{a}sen, Andre and Fosseide, Knut T. and Kermorvant, Christopher},
title = {A Comprehensive Comparison of Open-Source Libraries for Handwritten Text Recognition in Norwegian},
year = {2022},
isbn = {978-3-031-06554-5},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://doi.org/10.1007/978-3-031-06555-2_27},
doi = {10.1007/978-3-031-06555-2_27},
booktitle = {Document Analysis Systems: 15th IAPR International Workshop, DAS 2022, La Rochelle, France, May 22–25, 2022, Proceedings},
pages = {399–413},
numpages = {15},
keywords = {Norwegian language, Open-source, Handwriting recognition},
location = {La Rochelle, France}
}
``` | 2280faa643ac252ace9af0e976757b2e |
meongracun/nmt-mpst-id-en-lr_0.0001-ep_30-seq_128_bs-32 | meongracun | t5 | 9 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,239 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_30-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8218
- Bleu: 0.1371
- Meteor: 0.294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 2.6357 | 0.042 | 0.1513 |
| No log | 2.0 | 404 | 2.4891 | 0.0526 | 0.1749 |
| 2.781 | 3.0 | 606 | 2.3754 | 0.062 | 0.1918 |
| 2.781 | 4.0 | 808 | 2.2946 | 0.0693 | 0.2047 |
| 2.4692 | 5.0 | 1010 | 2.2262 | 0.0779 | 0.2175 |
| 2.4692 | 6.0 | 1212 | 2.1729 | 0.0825 | 0.2231 |
| 2.4692 | 7.0 | 1414 | 2.1226 | 0.0897 | 0.2328 |
| 2.2484 | 8.0 | 1616 | 2.0789 | 0.0932 | 0.2381 |
| 2.2484 | 9.0 | 1818 | 2.0450 | 0.1007 | 0.2478 |
| 2.099 | 10.0 | 2020 | 2.0132 | 0.1041 | 0.255 |
| 2.099 | 11.0 | 2222 | 1.9818 | 0.1085 | 0.2584 |
| 2.099 | 12.0 | 2424 | 1.9608 | 0.113 | 0.2639 |
| 1.9729 | 13.0 | 2626 | 1.9422 | 0.1165 | 0.2689 |
| 1.9729 | 14.0 | 2828 | 1.9223 | 0.1186 | 0.2717 |
| 1.8885 | 15.0 | 3030 | 1.9114 | 0.1219 | 0.2757 |
| 1.8885 | 16.0 | 3232 | 1.9020 | 0.1238 | 0.2794 |
| 1.8885 | 17.0 | 3434 | 1.8827 | 0.1254 | 0.2793 |
| 1.8171 | 18.0 | 3636 | 1.8762 | 0.1278 | 0.2824 |
| 1.8171 | 19.0 | 3838 | 1.8686 | 0.1298 | 0.285 |
| 1.7597 | 20.0 | 4040 | 1.8595 | 0.1307 | 0.2864 |
| 1.7597 | 21.0 | 4242 | 1.8533 | 0.1328 | 0.2891 |
| 1.7597 | 22.0 | 4444 | 1.8453 | 0.1335 | 0.2901 |
| 1.7183 | 23.0 | 4646 | 1.8400 | 0.1347 | 0.2912 |
| 1.7183 | 24.0 | 4848 | 1.8342 | 0.135 | 0.2914 |
| 1.6893 | 25.0 | 5050 | 1.8308 | 0.1355 | 0.2919 |
| 1.6893 | 26.0 | 5252 | 1.8258 | 0.1357 | 0.2924 |
| 1.6893 | 27.0 | 5454 | 1.8248 | 0.1365 | 0.2933 |
| 1.6667 | 28.0 | 5656 | 1.8233 | 0.137 | 0.294 |
| 1.6667 | 29.0 | 5858 | 1.8223 | 0.1371 | 0.2941 |
| 1.6585 | 30.0 | 6060 | 1.8218 | 0.1371 | 0.294 |
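The BLEU score reported above builds on clipped (modified) n-gram precision. A toy sketch of that building block (not the full metric, which also combines several n-gram orders and a brevity penalty):

```python
from collections import Counter

def modified_ngram_precision(reference, hypothesis, n=1):
    """Clipped n-gram precision: each hypothesis n-gram counts at most
    as often as it appears in the reference."""
    def ngrams(text):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, hyp = ngrams(reference), ngrams(hypothesis)
    clipped = sum(min(count, ref[g]) for g, count in hyp.items())
    return clipped / max(sum(hyp.values()), 1)

# The classic clipping example: "the" is capped at its reference count.
print(modified_ngram_precision("the cat is on the mat", "the the the cat"))  # 0.75
```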
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
| a0e3b4ba735587b7381e60a840febd7c |
davanstrien/dataset_mentions2 | davanstrien | albert | 13 | 10 | sentence-transformers | 1 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,424 | false |
# davanstrien/dataset_mentions2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("davanstrien/dataset_mentions2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 2b32208f09532f39112346af133931d6 |
osanseviero/test_sentence_transformers2 | osanseviero | bert | 12 | 5 | sentence-transformers | 0 | sentence-similarity | true | false | false | apache-2.0 | null | ['flax-sentence-embeddings/stackexchange_xml', 's2orc', 'ms_marco', 'wiki_atomic_edits', 'snli', 'multi_nli', 'embedding-data/altlex', 'embedding-data/simple-wiki', 'embedding-data/flickr30k-captions', 'embedding-data/coco_captions', 'embedding-data/sentence-compression', 'embedding-data/QQP', 'yahoo_answers_topics'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 3,543 | false |
# sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
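Since the card lists clustering and semantic search as target tasks, here is a small hedged sketch of how two sentence embeddings are compared with cosine similarity (toy 3-d vectors stand in for the 384-d embeddings produced above):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for the pooled sentence embeddings above.
emb_a = [1.0, 0.0, 1.0]
emb_b = [1.0, 0.0, 1.0]
print(cosine_similarity(emb_a, emb_b))  # identical vectors -> 1.0
```

In semantic search, the query embedding is scored against every corpus embedding this way and the highest-scoring sentences are returned.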
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L3-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | e48534f0797f44dfbb873a955befe07e |
fathyshalab/all-roberta-large-v1-kitchen_and_dining-1-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,523 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-kitchen_and_dining-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
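Pending that information, a hedged inference sketch (the example sentence is purely illustrative, and the returned label names depend on the unknown fine-tuning data):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="fathyshalab/all-roberta-large-v1-kitchen_and_dining-1-16-5",
)
# Illustrative input; the label set comes from the fine-tuning data.
print(clf("Which drawer should the silverware go in?"))
```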
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7421 | 1.0 | 1 | 2.5878 | 0.2012 |
| 2.1065 | 2.0 | 2 | 2.4975 | 0.2012 |
| 1.5994 | 3.0 | 3 | 2.4274 | 0.2249 |
| 1.1739 | 4.0 | 4 | 2.3808 | 0.2456 |
| 1.083 | 5.0 | 5 | 2.3560 | 0.2692 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| c2ae7904af17078605bac5ad97f7035b |
xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text | xfbai | bart | 4 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AMRBART'] | false | true | true | 1,719 | false |
## AMRBART-large-finetuned-AMR2.0-AMR2Text
This model is a fine-tuned version of [AMRBART-large](https://huggingface.co/xfbai/AMRBART-large) on the AMR2.0 dataset. It achieves a Sacre-BLEU score of 45.7 on the evaluation set. More details are given in the paper [Graph Pre-training for AMR Parsing and Generation](https://arxiv.org/pdf/2203.07836.pdf) by Bai et al., ACL 2022.
## Model description
Same as AMRBART.
## Training data
The model is finetuned on [AMR2.0](https://catalog.ldc.upenn.edu/LDC2020T02), a dataset consisting of 36,521
training instances, 1,368 validation instances, and 1,371 test instances.
## Intended uses & limitations
You can use the model for AMR-to-text generation, but it's mostly intended to be used in the domain of News.
## How to use
Here is how to initialize this model in PyTorch:
```python
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text")
```
Please refer to [this repository](https://github.com/muyeby/AMRBART) for tokenizer initialization and data preprocessing.
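As a hedged sketch only: the real pipeline needs the AMR-aware tokenizer setup and preprocessing from the repository above, so the plain BART tokenizer and the linearized-AMR input below are illustrative stand-ins, not the official recipe.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# NOTE: stand-in tokenizer; see the AMRBART repository for the proper setup.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained(
    "xfbai/AMRBART-large-finetuned-AMR2.0-AMR2Text"
)

# Illustrative linearized AMR; the exact serialization used in training
# is defined by the preprocessing scripts in the AMRBART repository.
amr = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-01 :ARG0 b ) )"
inputs = tokenizer(amr, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=5, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```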
## BibTeX entry and citation info
Please cite this paper if you find this model helpful
```bibtex
@inproceedings{bai-etal-2022-graph,
title = "Graph Pre-training for {AMR} Parsing and Generation",
author = "Bai, Xuefeng and
Chen, Yulong and
Zhang, Yue",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "todo",
doi = "todo",
pages = "todo"
}
``` | fd858b783fff80ecd19c78fcf8728232 |
responsibility-framing/predict-perception-xlmr-cause-none | responsibility-framing | xlm-roberta | 12 | 21 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 10,563 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-none
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8639
- Rmse: 1.3661
- Rmse Cause::a Spontanea, priva di un agente scatenante: 1.3661
- Mae: 1.0795
- Mae Cause::a Spontanea, priva di un agente scatenante: 1.0795
- R2: -1.7872
- R2 Cause::a Spontanea, priva di un agente scatenante: -1.7872
- Cos: -0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3501
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Spontanea, priva di un agente scatenante | Mae | Mae Cause::a Spontanea, priva di un agente scatenante | R2 | R2 Cause::a Spontanea, priva di un agente scatenante | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------------:|:------:|:-----------------------------------------------------:|:-------:|:----------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0626 | 1.0 | 15 | 0.6787 | 0.8244 | 0.8244 | 0.7453 | 0.7453 | -0.0149 | -0.0149 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0186 | 2.0 | 30 | 0.6769 | 0.8233 | 0.8233 | 0.7457 | 0.7457 | -0.0122 | -0.0122 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0346 | 3.0 | 45 | 0.6812 | 0.8259 | 0.8259 | 0.7489 | 0.7489 | -0.0187 | -0.0187 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.9481 | 4.0 | 60 | 1.0027 | 1.0020 | 1.0020 | 0.8546 | 0.8546 | -0.4994 | -0.4994 | -0.3043 | 0.0 | 0.5 | 0.2579 | nan |
| 0.8838 | 5.0 | 75 | 0.9352 | 0.9677 | 0.9677 | 0.8463 | 0.8463 | -0.3985 | -0.3985 | -0.2174 | 0.0 | 0.5 | 0.2966 | nan |
| 0.7971 | 6.0 | 90 | 0.9396 | 0.9700 | 0.9700 | 0.8608 | 0.8608 | -0.4050 | -0.4050 | -0.2174 | 0.0 | 0.5 | 0.3156 | nan |
| 0.8182 | 7.0 | 105 | 0.9485 | 0.9746 | 0.9746 | 0.8509 | 0.8509 | -0.4184 | -0.4184 | -0.1304 | 0.0 | 0.5 | 0.2788 | nan |
| 0.696 | 8.0 | 120 | 1.1396 | 1.0682 | 1.0682 | 0.9309 | 0.9309 | -0.7041 | -0.7041 | -0.1304 | 0.0 | 0.5 | 0.2899 | nan |
| 0.6337 | 9.0 | 135 | 1.3064 | 1.1437 | 1.1437 | 0.9612 | 0.9612 | -0.9536 | -0.9536 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5308 | 10.0 | 150 | 1.2403 | 1.1144 | 1.1144 | 0.9359 | 0.9359 | -0.8547 | -0.8547 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5226 | 11.0 | 165 | 1.3433 | 1.1597 | 1.1597 | 0.9542 | 0.9542 | -1.0087 | -1.0087 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.474 | 12.0 | 180 | 1.5321 | 1.2386 | 1.2386 | 1.0340 | 1.0340 | -1.2910 | -1.2910 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.3899 | 13.0 | 195 | 1.6322 | 1.2784 | 1.2784 | 1.0083 | 1.0083 | -1.4408 | -1.4408 | -0.3043 | 0.0 | 0.5 | 0.3590 | nan |
| 0.3937 | 14.0 | 210 | 1.7519 | 1.3244 | 1.3244 | 1.0540 | 1.0540 | -1.6197 | -1.6197 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.4128 | 15.0 | 225 | 1.8588 | 1.3643 | 1.3643 | 1.0765 | 1.0765 | -1.7797 | -1.7797 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3424 | 16.0 | 240 | 1.7211 | 1.3128 | 1.3128 | 1.0217 | 1.0217 | -1.5737 | -1.5737 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3307 | 17.0 | 255 | 1.7802 | 1.3351 | 1.3351 | 1.0790 | 1.0790 | -1.6621 | -1.6621 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.2972 | 18.0 | 270 | 1.5272 | 1.2366 | 1.2366 | 0.9945 | 0.9945 | -1.2837 | -1.2837 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2862 | 19.0 | 285 | 1.7213 | 1.3128 | 1.3128 | 1.0574 | 1.0574 | -1.5740 | -1.5740 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2844 | 20.0 | 300 | 1.8999 | 1.3793 | 1.3793 | 1.0930 | 1.0930 | -1.8411 | -1.8411 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2404 | 21.0 | 315 | 1.9806 | 1.4082 | 1.4082 | 1.1221 | 1.1221 | -1.9617 | -1.9617 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2349 | 22.0 | 330 | 1.8649 | 1.3665 | 1.3665 | 1.0953 | 1.0953 | -1.7888 | -1.7888 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2323 | 23.0 | 345 | 1.8256 | 1.3520 | 1.3520 | 1.0694 | 1.0694 | -1.7299 | -1.7299 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.2217 | 24.0 | 360 | 1.9150 | 1.3847 | 1.3847 | 1.1017 | 1.1017 | -1.8636 | -1.8636 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2262 | 25.0 | 375 | 1.8536 | 1.3624 | 1.3624 | 1.0667 | 1.0667 | -1.7719 | -1.7719 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2052 | 26.0 | 390 | 1.7727 | 1.3323 | 1.3323 | 1.0475 | 1.0475 | -1.6508 | -1.6508 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2121 | 27.0 | 405 | 1.8088 | 1.3458 | 1.3458 | 1.0588 | 1.0588 | -1.7048 | -1.7048 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1723 | 28.0 | 420 | 1.8283 | 1.3530 | 1.3530 | 1.0628 | 1.0628 | -1.7340 | -1.7340 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1932 | 29.0 | 435 | 1.8566 | 1.3635 | 1.3635 | 1.0763 | 1.0763 | -1.7764 | -1.7764 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2157 | 30.0 | 450 | 1.8639 | 1.3661 | 1.3661 | 1.0795 | 1.0795 | -1.7872 | -1.7872 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| 8349ae4a6788c1ad58dfdf084975abfa |
infinitejoy/wav2vec2-large-xls-r-300m-armenian | infinitejoy | wav2vec2 | 18 | 10 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hy-AM'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 1,718 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-armenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9669
- Wer: 0.6942
## Model description
More information needed
## Intended uses & limitations
More information needed
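Pending that information, a hedged transcription sketch (the audio path is a hypothetical placeholder; the checkpoint expects 16 kHz mono input):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-armenian",
)
# "sample.wav" is a placeholder for any 16 kHz mono Armenian recording:
# print(asr("sample.wav")["text"])
```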
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.7294 | 27.78 | 500 | 0.8540 | 0.9944 |
| 0.8863 | 55.56 | 1000 | 0.7282 | 0.7312 |
| 0.5789 | 83.33 | 1500 | 0.8178 | 0.8102 |
| 0.3899 | 111.11 | 2000 | 0.8034 | 0.7701 |
| 0.2869 | 138.89 | 2500 | 0.9061 | 0.6999 |
| 0.1934 | 166.67 | 3000 | 0.9400 | 0.7105 |
| 0.1551 | 194.44 | 3500 | 0.9667 | 0.6955 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| b4ed021269220edbefd3c5fbf184569d |
nubby/ozadomi-artist | nubby | null | 3 | 0 | null | 1 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,258 | false |
Waifu Diffusion 1.3 base model with Dreambooth training on images drawn by the artist "ozadomi".
Can be used in Stable Diffusion, including the extremely popular Web UI by AUTOMATIC1111, like any other model: place the `.ckpt` file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
Use "m_ozdartist" in the prompt to activate the style.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 6a977fef77758990100b6f68def9ed04 |
sd-dreambooth-library/kiril | sd-dreambooth-library | null | 22 | 4 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,222 | false | ### kiril on Stable Diffusion via Dreambooth
#### model by Eddiefloat
This is the Stable Diffusion model fine-tuned with the kiril concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **kiril**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/kiril/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/kiril/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/kiril/resolve/main/concept_images/3.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/kiril/resolve/main/concept_images/0.jpeg)
| f82c1b7b25e76ceabac75845df648f69 |
tyoyo/dreambooth-v2-1-512-deluha-0 | tyoyo | null | 27 | 5 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,570 | false | ### dreambooth-v2-1-512-deluha-0 Dreambooth model trained by tyoyo with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
deluha (use that in your prompt)
![deluha 0](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%281%29.jpg)![deluha 1](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%282%29.jpg)![deluha 2](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%283%29.jpg)![deluha 3](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%284%29.jpg)![deluha 4](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%285%29.jpg)![deluha 5](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%286%29.jpg)![deluha 6](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%287%29.jpg)![deluha 7](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%288%29.jpg)![deluha 8](https://huggingface.co/tyoyo/dreambooth-v2-1-512-deluha-0/resolve/main/concept_images/deluha_%289%29.jpg)
| fe3f80caa7a759ef6b33bf2b51cf5233 |
arun-shankar/GPT-2-covid-news-articles | arun-shankar | gpt2 | 11 | 29 | transformers | 1 | text-generation | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['medical'] | false | true | true | 577 | false | Dataset: https://www.kaggle.com/datasets/timmayer/covid-news-articles-2020-2022
A comprehensive guide can be found here: https://medium.com/@shankar.arunp/easily-build-your-own-gpt-from-scratch-using-aws-51811b6355d3
The model is GPT-2 further pre-trained on the news articles to incorporate COVID-19-related context into the model.
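A hedged sampling sketch with the `transformers` pipeline (the prompt is illustrative):

```python
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="arun-shankar/GPT-2-covid-news-articles")
# Illustrative prompt; the model continues it in COVID-era news style.
result = generator("Health officials announced today that", max_length=60)
print(result[0]["generated_text"])
```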
Similar article on how to further pre-train a BERT base model from scratch using the articles can be found here: https://medium.com/@shankar.arunp/training-bert-from-scratch-on-your-custom-domain-data-a-step-by-step-guide-with-amazon-25fcbee4316a | f71123b4185833ba142638d877e805a2 |
shields/wav2vec2-base-20sec-timit-and-dementiabank | shields | wav2vec2 | 25 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,652 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-20sec-timit-and-dementiabank
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4338
- Wer: 0.2313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6839 | 2.53 | 500 | 2.7287 | 1.0 |
| 0.8708 | 5.05 | 1000 | 0.5004 | 0.3490 |
| 0.2879 | 7.58 | 1500 | 0.4411 | 0.2872 |
| 0.1877 | 10.1 | 2000 | 0.4359 | 0.2594 |
| 0.1617 | 12.63 | 2500 | 0.4404 | 0.2492 |
| 0.1295 | 15.15 | 3000 | 0.4356 | 0.2418 |
| 0.1146 | 17.68 | 3500 | 0.4338 | 0.2313 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| 0b20c79ff46f8218f9e7b90ecd3b1f44 |
KoichiYasuoka/roberta-base-serbian | KoichiYasuoka | roberta | 8 | 5 | transformers | 1 | fill-mask | true | false | false | cc-by-sa-4.0 | ['sr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['serbian', 'masked-lm'] | false | true | true | 588 | false |
# roberta-base-serbian
## Model Description
This is a RoBERTa model in Serbian (Cyrillic and Latin) pre-trained on [srWaC](http://hdl.handle.net/11356/1063). You can fine-tune `roberta-base-serbian` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-serbian-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-serbian")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-serbian")
```
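Since this is a masked-LM checkpoint, the `fill-mask` pipeline can also query it directly; a hedged sketch (the example sentence is illustrative, and RoBERTa-style models use `<mask>` as the mask token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-serbian")
# Illustrative Serbian sentence with one masked token.
for candidate in unmasker("Београд је главни град <mask>."):
    print(candidate["token_str"], round(candidate["score"], 4))
```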
| a58082a6ba179ef3446f68a7a777e48b |