| Column | Type | Range / classes |
|---|---|---|
| repo_id | string | 4-110 chars |
| author | string (nullable) | 2-27 chars |
| model_type | string (nullable) | 2-29 chars |
| files_per_repo | int64 | 2-15.4k |
| downloads_30d | int64 | 0-19.9M |
| library | string (nullable) | 2-37 chars |
| likes | int64 | 0-4.34k |
| pipeline | string (nullable) | 5-30 chars |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | 2-30 chars |
| languages | string (nullable) | 4-1.63k chars |
| datasets | string (nullable) | 2-2.58k chars |
| co2 | string | 29 classes |
| prs_count | int64 | 0-125 |
| prs_open | int64 | 0-120 |
| prs_merged | int64 | 0-15 |
| prs_closed | int64 | 0-28 |
| discussions_count | int64 | 0-218 |
| discussions_open | int64 | 0-148 |
| discussions_closed | int64 | 0-70 |
| tags | string | 2-513 chars |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401-598k |
| is_nc | bool | 1 class |
| readme | string | 0-598k chars |
| hash | string | 32 chars |
speech31/wav2vec2-large-english-phoneme-v2 | speech31 | wav2vec2 | 10 | 161 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,722 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base960-english-phoneme_v2
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4069
- Cer: 0.0900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
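Two of the hyperparameters above are linked quantities: the total train batch size is `train_batch_size × gradient_accumulation_steps`, and the linear scheduler ramps the learning rate up over the 1000 warmup steps before decaying it toward zero. A plain-Python sketch (the 3500-step total is approximated from the last row of the results table and is illustrative only):

```python
def linear_schedule_lr(step, base_lr=0.0003, warmup_steps=1000, total_steps=3500):
    """Linear warmup to base_lr, then linear decay to 0
    (mirrors lr_scheduler_type: linear with lr_scheduler_warmup_steps: 1000)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

train_batch_size = 32
gradient_accumulation_steps = 2
effective_batch = train_batch_size * gradient_accumulation_steps  # matches total_train_batch_size: 64

print(effective_batch)                      # 64
print(round(linear_schedule_lr(500), 6))    # halfway through warmup: 0.00015
print(round(linear_schedule_lr(1000), 6))   # peak learning rate: 0.0003
```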
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.18 | 6.94 | 500 | 0.3118 | 0.0923 |
| 0.2622 | 13.88 | 1000 | 0.4387 | 0.1218 |
| 0.2145 | 20.83 | 1500 | 0.4441 | 0.1121 |
| 0.1429 | 27.77 | 2000 | 0.4001 | 0.1045 |
| 0.0927 | 34.72 | 2500 | 0.4692 | 0.1062 |
| 0.0598 | 41.66 | 3000 | 0.3960 | 0.0971 |
| 0.0356 | 48.61 | 3500 | 0.4069 | 0.0900 |
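The `Cer` column above is the character error rate: the character-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal sketch (the phoneme strings are made up for illustration, not model output):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate = Levenshtein distance / len(reference)."""
    m, n = len(reference), len(hypothesis)
    if m == 0:
        return 0.0 if n == 0 else float("inf")
    # dp[j] holds the edit distance between a prefix of reference and hypothesis[:j]
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (reference[i - 1] != hypothesis[j - 1]))  # substitution
            prev = cur
    return dp[n] / m

print(cer("hɛloʊ wɝld", "hɛloʊ wɝl"))  # one dropped character out of ten: 0.1
```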
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1.post201
- Datasets 2.5.2.dev0
- Tokenizers 0.12.1
| 13cccf95c92b3ee055831d4fb4eedf31 |
muhtasham/tiny-mlm-glue-cola-target-glue-mnli | muhtasham | bert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-cola-target-glue-mnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola](https://huggingface.co/muhtasham/tiny-mlm-glue-cola) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8037
- Accuracy: 0.6427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
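The optimizer line above pins Adam down completely; for reference, a single Adam update with those betas and epsilon looks like the following (a plain-Python sketch on a scalar parameter, not the framework implementation):

```python
def adam_step(param, grad, m, v, t, lr=3e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)  # first step moves the parameter by approximately lr
```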
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0736 | 0.04 | 500 | 1.0266 | 0.4807 |
| 1.0005 | 0.08 | 1000 | 0.9516 | 0.5605 |
| 0.9517 | 0.12 | 1500 | 0.9140 | 0.5810 |
| 0.9271 | 0.16 | 2000 | 0.9009 | 0.5921 |
| 0.919 | 0.2 | 2500 | 0.8858 | 0.6014 |
| 0.9125 | 0.24 | 3000 | 0.8740 | 0.6069 |
| 0.8965 | 0.29 | 3500 | 0.8676 | 0.6134 |
| 0.89 | 0.33 | 4000 | 0.8547 | 0.6193 |
| 0.8754 | 0.37 | 4500 | 0.8516 | 0.6214 |
| 0.8779 | 0.41 | 5000 | 0.8448 | 0.6220 |
| 0.8698 | 0.45 | 5500 | 0.8396 | 0.6252 |
| 0.8653 | 0.49 | 6000 | 0.8371 | 0.6287 |
| 0.8692 | 0.53 | 6500 | 0.8304 | 0.6309 |
| 0.8579 | 0.57 | 7000 | 0.8307 | 0.6301 |
| 0.8528 | 0.61 | 7500 | 0.8151 | 0.6409 |
| 0.8538 | 0.65 | 8000 | 0.8153 | 0.6381 |
| 0.8451 | 0.69 | 8500 | 0.8264 | 0.6329 |
| 0.8497 | 0.73 | 9000 | 0.8002 | 0.6464 |
| 0.8401 | 0.77 | 9500 | 0.8125 | 0.6363 |
| 0.8299 | 0.81 | 10000 | 0.7968 | 0.6464 |
| 0.8343 | 0.86 | 10500 | 0.8037 | 0.6427 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 3946bec846913cb2a93a70a34b9c2fda |
JaviBJ/sagemaker-distilbert-emotion | JaviBJ | distilbert | 10 | 8 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,286 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
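One detail worth noting in the hyperparameters above: with `lr_scheduler_warmup_steps: 500` and only 500 optimization steps in the single epoch (see the results table), the learning rate is still warming up for the entire run and the linear-decay phase is never entered. A sketch:

```python
base_lr, warmup_steps, total_steps = 3e-05, 500, 500

def lr_at(step):
    # Linear warmup; the decay branch is unreachable in this particular run
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

print(lr_at(250))  # halfway through training: 1.5e-05
print(lr_at(499))  # still below the nominal peak on the last step
```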
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9351 | 1.0 | 500 | 0.2469 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| ae3f02fdbf7641730d8e7a2224ec5043 |
milmor/t5-small-spanish-nahuatl | milmor | t5 | 7 | 4 | transformers | 2 | translation | true | false | false | apache-2.0 | ['es', 'nah'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,718 | false |
# t5-small-spanish-nahuatl
## Model description
This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on 29,007 Spanish and Nahuatl sentences: 12,890 samples collected from the web and 16,117 samples from the Axolotl dataset.
The dataset is normalized using 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).
## Usage
```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('milmor/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('milmor/t5-small-spanish-nahuatl')
model.eval()
sentence = 'muchas flores son blancas'
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
# outputs = miak xochitl istak
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(outputs)
```
## Evaluation results
The model is evaluated on 400 validation sentences.
- Validation loss: 1.36
_Note: Since the Axolotl corpus contains multiple misalignments, the real Validation loss is slightly better. These misalignments also introduce noise into the training._
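Assuming the reported loss is the mean cross-entropy per token, a validation loss of 1.36 corresponds to a per-token perplexity of exp(1.36):

```python
import math

validation_loss = 1.36
perplexity = math.exp(validation_loss)
print(round(perplexity, 2))  # ~3.9: on average the model spreads its mass over roughly four candidate tokens
```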
## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text Transformer.
- Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).
> Created by [Emilio Alejandro Morales](https://huggingface.co/milmor). | 85fed694309375b1cea517b741aa90a6 |
KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head | KoichiYasuoka | deberta-v2 | 21 | 9 | transformers | 0 | question-answering | true | false | false | cc-by-sa-4.0 | ['ja'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['japanese', 'wikipedia', 'question-answering', 'dependency-parsing'] | false | true | true | 3,909 | false |
# deberta-base-japanese-wikipedia-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and 青空文庫 texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to disambiguate when the word given as `question` occurs more than once in the context.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    # QA model scores candidate heads; auxiliary models tag POS and dependency relations
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    # Split the text into long-unit-words, each with a dependency-relation label
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    # z: POS features keyed by word start offset; m: (n+1)x(n+1) head-score matrix (index 0 is the root)
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    # One QA input per word: the word as question, the context with that word masked
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    # Head score of word j for word i = start logit + end logit of word j's token span
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    # Decode the maximum spanning tree; if several roots emerge, force a single root and re-decode
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    # Emit the parse in CoNLL-U format
    u="# text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
| 1a3aabca3fa8ad1011e12c0184686636 |
GroNLP/T0pp-sharded | GroNLP | t5 | 57 | 22 | transformers | 3 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['bigscience/P3'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 15,217 | false |
*This repository provides a sharded version of the T0pp model that can be loaded in low-memory setups.*
**Official repositories**: [Github](https://github.com/bigscience-workshop/t-zero) | [Hugging Face Hub](https://huggingface.co/bigscience/T0pp)
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
**Note: the model was trained with bf16 activations. As such, we highly discourage running inference with fp16. fp32 or bf16 should be preferred.**
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
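The capped proportional sampling described above can be sketched as follows (the dataset names and sizes are hypothetical; only the capping rule comes from the card):

```python
def sampling_weights(sizes, num_templates, cap=500_000):
    """Proportional sampling, with any dataset over `cap` examples
    treated as having cap / num_templates examples."""
    effective = {name: (cap // num_templates[name] if n > cap else n)
                 for name, n in sizes.items()}
    total = sum(effective.values())
    return {name: n / total for name, n in effective.items()}

# hypothetical mixture
sizes = {"small_qa": 40_000, "huge_summarization": 3_000_000}
templates = {"small_qa": 5, "huge_summarization": 10}
weights = sampling_weights(sizes, templates)
print(weights)  # small_qa keeps its size; huge_summarization is capped at 50,000 effective examples
```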
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
# Bias and fairness
Even though we made deliberate choices to exclude datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
- Input: `what do you have dreams of innovating?` - Prediction: `a new type of sex`
- Input: `what's something this generation normalized that shouldn't be normalized at all?` - Prediction: `sexual assault`
- Input: `what is something everyone hates, but you like?` - Prediction: `sex`
- Input: `What is socially unacceptable but really shouldn't be?` - Prediction: `sex`
- Input: `What is your favorite "your mom" joke?` - Prediction: `Your mom is a slut`
- Input: `if you could invent anything useless but useful at the same time, what would it be?` - Prediction: `sex toy`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases present in the masked language models using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
<table>
<tr>
<td>Dataset</td>
<td>Model</td>
<td>Average (Acc.)</td>
<td>Median (Acc.)</td>
</tr>
<tr>
<td rowspan="6">CrowS-Pairs</td><td>T0</td><td>59.2</td><td>83.8</td>
</tr>
<tr>
<td>T0p</td><td>57.6</td><td>83.8</td>
</tr>
<tr>
<td>T0pp</td><td>62.7</td><td>64.4</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>57.6</td><td>69.5</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>47.1</td><td>37.8</td>
</tr>
<tr>
<td>T0_3B</td><td>56.9</td><td>82.6</td>
</tr>
<tr>
<td rowspan="6">WinoGender</td><td>T0</td><td>84.2</td><td>84.3</td>
</tr>
<tr>
<td>T0p</td><td>80.1</td><td>80.6</td>
</tr>
<tr>
<td>T0pp</td><td>89.2</td><td>90.0</td>
</tr>
<tr>
<td>T0_single_prompt</td><td>81.6</td><td>84.6</td>
</tr>
<tr>
<td>T0_original_task_only</td><td>83.7</td><td>83.8</td>
</tr>
<tr>
<td>T0_3B</td><td>69.7</td><td>69.4</td>
</tr>
</table>
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias comes in two schema types (type1 and type2), which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
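The scoring rule described above (a prediction counts as correct when the target noun appears in it) and the pro-minus-anti accuracy gap can be sketched with hypothetical predictions:

```python
def accuracy(examples):
    """examples: list of (target_noun, model_prediction) pairs; a prediction
    is correct when the target noun appears anywhere in it."""
    return sum(target in prediction for target, prediction in examples) / len(examples)

# hypothetical predictions, not actual model output
pro = [("mechanic", "the mechanic"), ("nurse", "the nurse"), ("developer", "the developer")]
anti = [("mechanic", "the customer"), ("nurse", "the nurse"), ("developer", "the developer")]
gap = accuracy(pro) - accuracy(anti)
print(accuracy(pro), accuracy(anti), round(gap, 3))  # a positive gap means stereotypes help the model
```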
<table>
<tr>
<td rowspan="2">Model</td>
<td rowspan="2">Subset</td>
<td colspan="3">Average (Acc.)</td>
<td colspan="3">Median (Acc.)</td>
</tr>
<tr>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
<td>Pro</td>
<td>Anti</td>
<td>Pro - Anti</td>
</tr>
<tr>
<td rowspan="2">T0</td><td>Type 1</td>
<td>68.0</td><td>61.9</td><td>6.0</td><td>71.7</td><td>61.9</td><td>9.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>79.3</td><td>76.4</td><td>2.8</td><td>79.3</td><td>75.0</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0p</td><td>Type 1</td>
<td>66.6</td><td>57.2</td><td>9.4</td><td>71.5</td><td>62.6</td><td>8.8</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>73.4</td><td>4.3</td><td>86.1</td><td>81.3</td><td>4.8</td>
</tr>
<tr>
<td rowspan="2">T0pp</td><td>Type 1</td>
<td>63.8</td><td>55.9</td><td>7.9</td><td>72.7</td><td>63.4</td><td>9.3</td>
</tr>
<tr>
<td>Type 2</td>
<td>66.8</td><td>63.0</td><td>3.9</td><td>79.3</td><td>74.0</td><td>5.3</td>
</tr>
<tr>
<td rowspan="2">T0_single_prompt</td><td>Type 1</td>
<td>73.7</td><td>60.5</td><td>13.2</td><td>79.3</td><td>60.6</td><td>18.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>77.7</td><td>69.6</td><td>8.0</td><td>80.8</td><td>69.7</td><td>11.1</td>
</tr>
<tr>
<td rowspan="2">T0_original_task_only</td><td>Type 1</td>
<td>78.1</td><td>67.7</td><td>10.4</td><td>81.8</td><td>67.2</td><td>14.6</td>
</tr>
<tr>
<td>Type 2</td>
<td>85.2</td><td>82.3</td><td>2.9</td><td>89.6</td><td>85.4</td><td>4.3</td>
</tr>
<tr>
<td rowspan="2">T0_3B</td><td>Type 1</td>
<td>82.3</td><td>70.1</td><td>12.2</td><td>83.6</td><td>62.9</td><td>20.7</td>
</tr>
<tr>
<td>Type 2</td>
<td>83.8</td><td>76.5</td><td>7.3</td><td>85.9</td><td>75.0</td><td>10.9</td>
</tr>
</table>
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | 259b389aacec9f577c02f29a6b8698c9 |
pig4431/CR_roBERTa_5E | pig4431 | roberta | 11 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,043 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CR_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3728
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6307 | 0.33 | 50 | 0.4608 | 0.66 |
| 0.3468 | 0.66 | 100 | 0.3195 | 0.8933 |
| 0.2359 | 0.99 | 150 | 0.2952 | 0.9 |
| 0.1786 | 1.32 | 200 | 0.2839 | 0.92 |
| 0.2581 | 1.66 | 250 | 0.2955 | 0.9267 |
| 0.231 | 1.99 | 300 | 0.2864 | 0.9133 |
| 0.1262 | 2.32 | 350 | 0.4320 | 0.8933 |
| 0.1935 | 2.65 | 400 | 0.2874 | 0.9133 |
| 0.1646 | 2.98 | 450 | 0.3581 | 0.9133 |
| 0.1151 | 3.31 | 500 | 0.3666 | 0.92 |
| 0.1184 | 3.64 | 550 | 0.3496 | 0.9267 |
| 0.1089 | 3.97 | 600 | 0.3655 | 0.9267 |
| 0.0969 | 4.3 | 650 | 0.3607 | 0.9267 |
| 0.0988 | 4.64 | 700 | 0.3707 | 0.9333 |
| 0.0597 | 4.97 | 750 | 0.3728 | 0.9333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
| 8c7a160a1df66024baf10632b74c5d6f |
AlexMcG/my_awesome_billsum_model | AlexMcG | t5 | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['billsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,707 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5537
- Rouge1: 0.1417
- Rouge2: 0.0517
- Rougel: 0.1173
- Rougelsum: 0.1172
- Gen Len: 19.0
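The Rouge1 score above is an F1 over unigram overlap between generated and reference summaries. A minimal sketch with made-up texts and plain whitespace tokenization (implementations such as the `rouge_score` package add normalization and optional stemming on top of this):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    pred, ref = Counter(prediction.split()), Counter(reference.split())
    overlap = sum((pred & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the bill funds new roads", "the bill would fund road repairs"))  # 2 shared unigrams -> 4/11
```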
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7255 | 0.1315 | 0.0434 | 0.1091 | 0.109 | 19.0 |
| No log | 2.0 | 124 | 2.6129 | 0.1351 | 0.0458 | 0.1121 | 0.112 | 19.0 |
| No log | 3.0 | 186 | 2.5659 | 0.1402 | 0.0498 | 0.1161 | 0.1161 | 19.0 |
| No log | 4.0 | 248 | 2.5537 | 0.1417 | 0.0517 | 0.1173 | 0.1172 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| dff3fd03701fbe15a4b4068c9584f4cf |
patrickfleith/arckt-rocket-v0.1 | patrickfleith | null | 17 | 4 | diffusers | 1 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard'] | false | true | true | 794 | false |
# DreamBooth model for the arckt concept trained by patrickfleith on the patrickfleith/dreambooth-hackathon-images-arckt dataset.
This is a Stable Diffusion model fine-tuned on the arckt (Ariane 5 rocket) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of arckt rocket**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `Ariane5` rocket images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('patrickfleith/arckt-rocket-v0.1')
image = pipeline().images[0]
image
```
| 11c603d5cacee74e92864fdd01a4e6fe |
amitkayal/distilbert-base-uncased-finetuned-ner | amitkayal | distilbert | 13 | 21 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,535 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9288
- Recall: 0.9388
- F1: 0.9338
- Accuracy: 0.9840
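The precision, recall, and F1 reported above are the standard NER metrics and are usually computed at the entity level (seqeval-style): a predicted entity only counts as correct if both its span and its type match exactly. A simplified sketch of that computation over BIO tag sequences (illustrative, not the exact `seqeval` implementation):

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" sentinel flushes the last span
        boundary = tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != etype)
        if boundary:
            if start is not None:
                entities.append((etype, start, i))
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return entities

def entity_f1(gold_tags, pred_tags):
    """Exact-match entity-level F1."""
    gold = set(extract_entities(gold_tags))
    pred = set(extract_entities(pred_tags))
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

entity_f1(["B-PER", "I-PER", "O", "B-LOC"], ["B-PER", "O", "O", "B-LOC"])  # -> 0.5
```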
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2456 | 1.0 | 878 | 0.0683 | 0.9151 | 0.9223 | 0.9187 | 0.9814 |
| 0.0542 | 2.0 | 1756 | 0.0609 | 0.9227 | 0.9335 | 0.9281 | 0.9829 |
| 0.0293 | 3.0 | 2634 | 0.0614 | 0.9288 | 0.9388 | 0.9338 | 0.9840 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
| f30fab7bc756280021e5546d46aac38c |
stabilityai/stable-diffusion-2-base | stabilityai | null | 20 | 58,023 | diffusers | 183 | text-to-image | false | false | false | openrail++ | null | null | null | 12 | 2 | 7 | 3 | 7 | 2 | 5 | ['stable-diffusion', 'text-to-image'] | false | true | true | 12,497 | false |
# Stable Diffusion v2-base Model Card
This model card focuses on the model associated with the Stable Diffusion v2-base model, available [here](https://github.com/Stability-AI/stablediffusion).
The model is trained from scratch for 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. It is then further trained for 850k steps at resolution `512x512` on the same dataset, restricted to images with resolution `>= 512x512`.
![image](https://github.com/Stability-AI/stablediffusion/blob/main/assets/stable-samples/txt2img/merged-0003.png?raw=true)
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-base-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt).
- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-base#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default PNDM/PLMS scheduler, in this example we are swapping it to EulerDiscreteScheduler):
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch
model_id = "stabilityai/stable-diffusion-2-base"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for lower VRAM usage (at the cost of speed)
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
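The v-objective from the paper cited above trains the model to predict a "velocity" rather than the noise directly. A scalar sketch of the target (illustrative only; real training uses latent tensors and a full per-timestep noise schedule):

```python
import math

def noisy_latent(x0, eps, alpha_bar):
    """Forward-process sample: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps."""
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

def v_target(x0, eps, alpha_bar):
    """Velocity target: v = sqrt(alpha_bar) * eps - sqrt(1 - alpha_bar) * x0."""
    return math.sqrt(alpha_bar) * eps - math.sqrt(1.0 - alpha_bar) * x0

# Sanity check: eps is recoverable from (x_t, v), which is part of what makes
# the v-parameterization well-behaved across the noise schedule.
ab, x0, eps = 0.7, 0.3, -1.2
xt, v = noisy_latent(x0, eps, ab), v_target(x0, eps, ab)
recovered_eps = math.sqrt(ab) * v + math.sqrt(1.0 - ab) * xt
```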
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:
![pareto](model-variants.jpg)
Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* | 11b4a6ac3c622caf5b23b6f6d875db20 |
lmqg/bart-large-squad-qg-ae | lmqg | bart | 21 | 45 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qg_squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation', 'answer extraction'] | true | true | true | 7,050 | false |
# Model Card of `lmqg/bart-large-squad-qg-ae`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for question generation and answer extraction jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-squad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qg-ae")
# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
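In these inputs, the `<hl>` tokens highlight the relevant span: the answer for question generation, and the target sentence for answer extraction. A small helper for constructing such prompts (illustrative only — not part of the `lmqg` package):

```python
def qg_input(context: str, answer: str) -> str:
    """Highlight the answer span and prepend the question-generation prefix."""
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

def ae_input(paragraph: str, sentence: str) -> str:
    """Highlight the target sentence and prepend the answer-extraction prefix."""
    highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
    return f"extract answers: {highlighted}"

qg_input("Beyonce starred as Etta James.", "Etta James")
# -> 'generate question: Beyonce starred as <hl> Etta James <hl>.'
```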
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.88 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 59.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 33.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 26.74 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27.32 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 65.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 54.27 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.61 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.68 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 63.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 94.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 65.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 59.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 70.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 67.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 64.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 61.73 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 59.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 42.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 82.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 69.5 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 64
- lr: 1e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| bfd9e2354350cafd8a9beffa9f33dcf5 |
pkachhad/bart-large-finetuned-parth | pkachhad | bart | 10 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,156 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-parth
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2530
- Rouge1: 40.8179
- Rouge2: 29.1558
- Rougel: 38.4554
- Rougelsum: 41.154
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| b528d50f3760da7e38897c9c91b22c87 |
SetFit/deberta-v3-large__sst2__train-16-1 | SetFit | deberta-v2 | 10 | 7 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,074 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.5497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7086 | 1.0 | 7 | 0.7176 | 0.2857 |
| 0.6897 | 2.0 | 14 | 0.7057 | 0.2857 |
| 0.6491 | 3.0 | 21 | 0.6582 | 0.8571 |
| 0.567 | 4.0 | 28 | 0.4480 | 0.8571 |
| 0.4304 | 5.0 | 35 | 0.5465 | 0.7143 |
| 0.0684 | 6.0 | 42 | 0.5408 | 0.8571 |
| 0.0339 | 7.0 | 49 | 0.6501 | 0.8571 |
| 0.0082 | 8.0 | 56 | 0.9152 | 0.8571 |
| 0.0067 | 9.0 | 63 | 2.5162 | 0.5714 |
| 0.0045 | 10.0 | 70 | 1.1136 | 0.8571 |
| 0.0012 | 11.0 | 77 | 1.1668 | 0.8571 |
| 0.0007 | 12.0 | 84 | 1.2071 | 0.8571 |
| 0.0005 | 13.0 | 91 | 1.2310 | 0.8571 |
| 0.0006 | 14.0 | 98 | 1.2476 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| cc33fe3a6a6dcb3dd1b68eb0a6b1e50a |
Patt/fine-tuned_ar-en | Patt | marian | 16 | 2 | transformers | 0 | translation | true | false | false | apache-2.0 | null | ['tatoeba_mt'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'generated_from_trainer'] | true | true | true | 1,067 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned_ar-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the tatoeba_mt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8464
- Bleu: 51.8158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| becc62d62e736523bbb8c5b0db5c82c1 |
Deep98/Heresy-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,855 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2244
- Train End Logits Accuracy: 0.9479
- Train Start Logits Accuracy: 0.9062
- Validation Loss: 0.4860
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
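The end/start logits accuracies above measure whether the argmax of each logit vector lands on the gold answer-boundary token. At inference time the two vectors are combined into a single answer span, roughly as follows (a simplified sketch of the usual extractive-QA decode, without the batching and masking of the real pipeline):

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    subject to end >= start and a maximum answer length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

best_span([0.1, 5.0, 0.2], [0.0, 0.3, 4.0])  # -> (1, 2)
```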
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2244 | 0.9479 | 0.9062 | 0.4860 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 491275a02172111ff60b812d5c5f70bc |
DrY/marian-finetuned-kde4-en-to-zh | DrY | marian | 14 | 20 | transformers | 0 | translation | true | false | false | apache-2.0 | null | ['kde4'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation', 'generated_from_trainer'] | true | true | true | 1,075 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9338
- Bleu: 40.6658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 86ecdbca5cb1d2ea222893cdb6649eb5 |
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-10_female-0_s530 | jonatasgrosman | wav2vec2 | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 477 | false |
# exp_w2v2r_es_xls-r_gender_male-10_female-0_s530
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
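If your recordings are not already sampled at 16kHz, resample them before inference. In practice you would use a proper sinc resampler (e.g. `torchaudio.transforms.Resample` or `librosa.resample`); the idea can be sketched with naive linear interpolation:

```python
def resample_linear(samples, src_rate, dst_rate=16000):
    """Naive linear-interpolation resampler (illustration only; prefer a
    proper sinc/polyphase resampler for real audio)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(round(len(samples) * dst_rate / src_rate))
    out = []
    for i in range(n_out):
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```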
| dc088087855e8593bb193232be7d24ad |
Helsinki-NLP/opus-mt-en-ga | Helsinki-NLP | marian | 11 | 110 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'ga'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,980 | false |
### eng-gle
* source group: English
* target group: Irish
* OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): gle
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.gle | 37.5 | 0.593 |
### System Info:
- hf_name: eng-gle
- source_languages: eng
- target_languages: gle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ga']
- src_constituents: {'eng'}
- tgt_constituents: {'gle'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: gle
- short_pair: en-ga
- chrF2_score: 0.593
- bleu: 37.5
- brevity_penalty: 1.0
- ref_len: 12200.0
- src_name: English
- tgt_name: Irish
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ga
- prefer_old: False
- long_pair: eng-gle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | bc53c3fb29374cfc11c0ff5407708fe2 |
StonyBrookNLP/teabreac-t5-3b | StonyBrookNLP | t5 | 10 | 3 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering, multi-step-reasoning, multi-hop-reasoning'] | false | true | true | 2,701 | false |
# What's this?
This is one of the models reported in the paper ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
# NOTE: This model is only pretrained on TeaBReaC, and not on any real QA dataset.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/teabreac-t5-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"answer_me: Who scored the first touchdown of the game?" +
"context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
``` | 44d4f4e06fa540658898b6eaf4f903da |
qmeeus/whisper-small-nl | qmeeus | whisper | 49 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'dutch', 'whisper-event'] | true | true | true | 1,911 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-nl
This model is a fine-tuned version of [qmeeus/whisper-small-nl](https://huggingface.co/qmeeus/whisper-small-nl) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3034
- Wer: 14.5354
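The WER reported above is the word-level edit distance between the reference transcript and the model's hypothesis, normalised by reference length. A self-contained sketch of the metric (toy strings, not this model's output):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(r)

# one substitution in four words -> 25% WER
print(round(100 * wer("dit is een test", "dit is de test"), 1))  # 25.0
```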
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.2045 | 2.49 | 1000 | 0.3194 | 16.1628 |
| 0.0652 | 4.97 | 2000 | 0.3425 | 16.3672 |
| 0.0167 | 7.46 | 3000 | 0.3915 | 15.8187 |
| 0.0064 | 9.95 | 4000 | 0.4190 | 15.7298 |
| 0.1966 | 2.02 | 5000 | 0.3298 | 15.0881 |
| 0.1912 | 4.04 | 6000 | 0.3266 | 14.8764 |
| 0.1008 | 7.02 | 7000 | 0.3261 | 14.8086 |
| 0.0899 | 9.04 | 8000 | 0.3196 | 14.6487 |
| 0.1126 | 12.02 | 9000 | 0.3283 | 14.5894 |
| 0.1071 | 14.04 | 10000 | 0.3034 | 14.5354 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 098649c03df5bd9d308e0f3980d6631e |
Haakf/allsides_left_headline_conc_overfit | Haakf | distilbert | 8 | 4 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,330 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Haakf/allsides_left_headline_conc_overfit
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8306
- Validation Loss: 3.0281
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -929, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
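The optimizer config above combines a 1000-step linear warmup with a polynomial decay of the learning rate. A self-contained sketch of that schedule (the serialized `decay_steps: -929` looks like a config artifact; the sketch assumes a positive decay horizon of 929 steps):

```python
def lr_at(step: int, init_lr: float = 2e-5, warmup_steps: int = 1000,
          decay_steps: int = 929, end_lr: float = 0.0, power: float = 1.0) -> float:
    """Linear warmup to init_lr, then polynomial decay toward end_lr."""
    if step < warmup_steps:
        return init_lr * step / warmup_steps
    s = min(step - warmup_steps, decay_steps)
    frac = 1.0 - s / decay_steps
    return (init_lr - end_lr) * (frac ** power) + end_lr

print(lr_at(500))   # 1e-05 (halfway through warmup)
print(lr_at(1000))  # 2e-05 (peak, start of decay)
print(lr_at(5000))  # 0.0   (fully decayed)
```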
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5280 | 3.4936 | 0 |
| 3.4633 | 3.2513 | 1 |
| 3.4649 | 3.3503 | 2 |
| 3.4537 | 3.2847 | 3 |
| 3.3745 | 3.3207 | 4 |
| 3.3546 | 3.1687 | 5 |
| 3.3208 | 3.0532 | 6 |
| 3.1858 | 3.2573 | 7 |
| 3.2212 | 3.0786 | 8 |
| 3.1136 | 2.9661 | 9 |
| 3.1065 | 3.1472 | 10 |
| 2.9766 | 3.0139 | 11 |
| 2.9592 | 3.0047 | 12 |
| 2.9163 | 3.0109 | 13 |
| 2.8840 | 2.9384 | 14 |
| 2.8533 | 3.0551 | 15 |
| 2.8657 | 3.0014 | 16 |
| 2.8383 | 3.0040 | 17 |
| 2.8457 | 3.0526 | 18 |
| 2.8306 | 3.0281 | 19 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
| 189d6a52fe34eaf69e092089bcbd6bcf |
sd-concepts-library/plen-ki-mun | sd-concepts-library | null | 11 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,260 | false | ### Plen-Ki-Mun on Stable Diffusion
This is the `<plen-ki-mun>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
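Textual Inversion stores a single learned embedding vector for the new token; loading the concept amounts to adding `<plen-ki-mun>` to the tokenizer vocabulary and appending its vector to the text encoder's embedding matrix. A self-contained sketch of that mechanism in plain PyTorch (toy sizes and a random stand-in vector, not the real `learned_embeds.bin` or the diffusers pipeline):

```python
import torch

vocab = {"a": 0, "photo": 1, "of": 2}       # toy vocabulary
embed = torch.nn.Embedding(len(vocab), 8)   # toy text-encoder embedding table

learned_vector = torch.randn(8)             # stands in for learned_embeds.bin

# register the new token and grow the embedding matrix by one row
vocab["<plen-ki-mun>"] = len(vocab)
new_weight = torch.cat([embed.weight.data, learned_vector.unsqueeze(0)], dim=0)
embed = torch.nn.Embedding.from_pretrained(new_weight, freeze=False)

ids = torch.tensor([vocab["a"], vocab["photo"], vocab["of"], vocab["<plen-ki-mun>"]])
print(embed(ids).shape)  # torch.Size([4, 8])
```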
Here is the new concept you will be able to use as an `object`:
![<plen-ki-mun> 0](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/4.jpeg)
![<plen-ki-mun> 1](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/0.jpeg)
![<plen-ki-mun> 2](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/3.jpeg)
![<plen-ki-mun> 3](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/2.jpeg)
![<plen-ki-mun> 4](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/1.jpeg)
![<plen-ki-mun> 5](https://huggingface.co/sd-concepts-library/plen-ki-mun/resolve/main/concept_images/5.jpeg)
| 50c90276575ced0fa232de5faebe511a |
thothai/turkce-kufur-tespiti | thothai | null | 8 | 1 | null | 0 | null | true | false | false | afl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 766 | false | # Thoth Ai, Türkçe hakaret ve küfürleri tespit etmek için oluşturulmuştur. Akademik projelerde kaynak gösterilmesi halinde kullanılabilir.
## Validation Metrics
- Loss: 0.230
- Accuracy: 0.936
- Macro F1: 0.927
- Micro F1: 0.936
- Weighted F1: 0.936
- Macro Precision: 0.929
- Micro Precision: 0.936
- Weighted Precision: 0.936
- Macro Recall: 0.925
- Micro Recall: 0.936
- Weighted Recall: 0.936
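The macro scores above average per-class F1 (all classes weighted equally), while the micro scores pool the raw true/false positive counts across classes before computing F1. A self-contained sketch with toy counts (not this model's confusion matrix):

```python
def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)

# two classes as (tp, fp, fn) — toy numbers
per_class = [(90, 10, 10), (30, 10, 10)]

macro = sum(f1_from_counts(*c) for c in per_class) / len(per_class)
tp = sum(c[0] for c in per_class)
fp = sum(c[1] for c in per_class)
fn = sum(c[2] for c in per_class)
micro = f1_from_counts(tp, fp, fn)

print(round(macro, 3), round(micro, 3))  # 0.825 0.857
```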
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("thothai/turkce-kufur-tespiti", use_auth_token=True)

inputs = tokenizer("Merhaba", return_tensors="pt")
outputs = model(**inputs)
```
jonatasgrosman/exp_w2v2r_fr_vp-100k_age_teens-2_sixties-8_s869 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['fr'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'fr'] | false | true | true | 497 | false | # exp_w2v2r_fr_vp-100k_age_teens-2_sixties-8_s869
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| c7baca107fd6cbbfab3c8b5bcd27db6d |
jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-8_sixties-2_s338 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 475 | false | # exp_w2v2r_de_xls-r_age_teens-8_sixties-2_s338
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| beb1f749d1251515188712f7a57d3d4c |
jannatul17/squad-bn-qgen-mt5-small-v1 | jannatul17 | t5 | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-squad-bn-qgen-mt5-small-all-metric-v2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6559
- Rouge1 Precision: 31.143
- Rouge1 Recall: 24.8687
- Rouge1 Fmeasure: 26.7861
- Rouge2 Precision: 12.1721
- Rouge2 Recall: 9.3907
- Rouge2 Fmeasure: 10.1945
- Rougel Precision: 29.2741
- Rougel Recall: 23.4105
- Rougel Fmeasure: 25.196
- Rougelsum Precision: 29.2488
- Rougelsum Recall: 23.3873
- Rougelsum Fmeasure: 25.1783
- Bleu-1: 20.2844
- Bleu-2: 11.7083
- Bleu-3: 7.2251
- Bleu-4: 4.6646
- Meteor: 0.1144
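The Bleu-n scores above are clipped n-gram precisions with a brevity penalty. A self-contained BLEU-1 sketch (toy English strings for illustration — the model itself generates Bengali questions):

```python
from collections import Counter
import math

def bleu1(reference: str, hypothesis: str) -> float:
    """Unigram precision, clipped by reference counts, times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    precision = overlap / len(hyp)
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * precision

print(round(bleu1("the cat sat on the mat", "the cat sat on mat"), 3))  # 0.819
```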
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | Meteor |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:-------:|:-------:|:------:|:------:|:------:|
| 0.9251 | 1.0 | 6769 | 0.7237 | 26.4973 | 20.6282 | 22.3983 | 9.3138 | 6.9928 | 7.6534 | 24.9538 | 19.4635 | 21.1113 | 24.9713 | 19.4608 | 21.119 | 17.5414 | 9.5172 | 5.6104 | 3.4646 | 0.097 |
| 0.8214 | 2.0 | 13538 | 0.6804 | 29.524 | 23.4125 | 25.2574 | 11.2954 | 8.6345 | 9.3841 | 27.8173 | 22.1005 | 23.8164 | 27.7939 | 22.0878 | 23.801 | 19.2368 | 10.9056 | 6.6821 | 4.2702 | 0.1074 |
| 0.7914 | 3.0 | 20307 | 0.6600 | 30.7136 | 24.5527 | 26.4259 | 11.8743 | 9.1634 | 9.9452 | 28.8725 | 23.1161 | 24.859 | 28.8566 | 23.1018 | 24.8457 | 19.9315 | 11.4473 | 7.0613 | 4.5701 | 0.1119 |
| 0.7895 | 4.0 | 27076 | 0.6559 | 31.1568 | 24.8787 | 26.8004 | 12.1685 | 9.3879 | 10.1929 | 29.2804 | 23.3999 | 25.1925 | 29.2554 | 23.3891 | 25.1818 | 20.2844 | 11.7083 | 7.2251 | 4.6646 | 0.1144 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| f435131a5f041715a64ffed499698879 |
annahaz/distilbert-base-multilingual-cased-finetuned-misogyny-multilingual | annahaz | distilbert | 9 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,379 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-misogyny-multilingual
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9917
- Accuracy: 0.8808
- F1: 0.7543
- Precision: 0.7669
- Recall: 0.7421
- Mae: 0.1192
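The F1 reported above is the harmonic mean of precision and recall; plugging in the reported precision and recall reproduces the reported F1:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.7669, 0.7421), 4))  # 0.7543 — matches the F1 reported above
```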
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3366 | 1.0 | 1407 | 0.3297 | 0.8630 | 0.6862 | 0.7886 | 0.6073 | 0.1370 |
| 0.2371 | 2.0 | 2814 | 0.3423 | 0.8802 | 0.7468 | 0.7802 | 0.7161 | 0.1198 |
| 0.1714 | 3.0 | 4221 | 0.4373 | 0.8749 | 0.7351 | 0.7693 | 0.7039 | 0.1251 |
| 0.1161 | 4.0 | 5628 | 0.5584 | 0.8699 | 0.7525 | 0.7089 | 0.8019 | 0.1301 |
| 0.0646 | 5.0 | 7035 | 0.7005 | 0.8788 | 0.7357 | 0.7961 | 0.6837 | 0.1212 |
| 0.0539 | 6.0 | 8442 | 0.7866 | 0.8710 | 0.7465 | 0.7243 | 0.7702 | 0.1290 |
| 0.0336 | 7.0 | 9849 | 0.8967 | 0.8783 | 0.7396 | 0.7828 | 0.7010 | 0.1217 |
| 0.0202 | 8.0 | 11256 | 0.9053 | 0.8810 | 0.7472 | 0.7845 | 0.7133 | 0.1190 |
| 0.018 | 9.0 | 12663 | 0.9785 | 0.8792 | 0.7478 | 0.7706 | 0.7262 | 0.1208 |
| 0.0069 | 10.0 | 14070 | 0.9917 | 0.8808 | 0.7543 | 0.7669 | 0.7421 | 0.1192 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0f23661578c957d77d1d1275c690fcd1 |
lmqg/bart-large-squadshifts-nyt-qg | lmqg | bart | 15 | 2 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qg_squadshifts'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation'] | true | true | true | 4,047 | false |
# Model Card of `lmqg/bart-large-squadshifts-nyt-qg`
This model is a fine-tuned version of [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: nyt) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (nyt)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-squadshifts-nyt-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-large-squadshifts-nyt-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squadshifts-nyt-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json)
| | Score | Type | Dataset |
|:-----------|--------:|:-------|:---------------------------------------------------------------------------|
| BERTScore | 93.04 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 25.82 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 17.11 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 12.03 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 8.74 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 25.08 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 65.02 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 25.28 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: nyt
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: lmqg/bart-large-squad
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squadshifts-nyt-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 12fa62f5ebd5d89fc8ea3dafb0aea0e8 |
EMBO/sd-smallmol-roles-v2 | EMBO | bert | 66 | 311 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['source_data_nlp'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,372 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-smallmol-roles-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy Score: 0.9995
- Precision: 0.9628
- Recall: 0.9716
- F1: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0013 | 1.0 | 1569 | 0.0015 | 0.9995 | 0.9628 | 0.9716 | 0.9672 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
| 7b6cd20d494f6a4f7eeeccd13c220ab7 |
glasses/resnet152 | glasses | null | 4 | 27 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagenet'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['image-classification'] | false | true | true | 1,589 | false | # resnet152
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tells the model you'll like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
| f2c3ae417f62f8d71a4cd7ef418a2f3e |
leetdavid/market_positivity_model | leetdavid | bert | 4 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,718 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# market_positivity_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5776
- Train Sparse Categorical Accuracy: 0.7278
- Validation Loss: 0.6460
- Validation Sparse Categorical Accuracy: 0.6859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7207 | 0.6394 | 0.6930 | 0.6811 | 0 |
| 0.6253 | 0.7033 | 0.6549 | 0.6872 | 1 |
| 0.5776 | 0.7278 | 0.6460 | 0.6859 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| 6a70e46c9a7164d46511c05686c4e057 |
Phoeo/Re-l_Donovna_Mayer | Phoeo | null | 4 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 572 | false | # What is this
This is a hypernetwork trained to make pictures of Re-l Mayer from Ergo Proxy. Trained on Atlers mix (758f6d9b). It is usable on other anime-aware models, with limited usefulness on photorealistic models.
# Installing
## Webui
* Download `relmayer.pt` into `stable-diffusion-webui/models/hypernetworks`
* Go to settings
* Select `relmayer.pt` in hypernetwork dropdown menu
# Usage
Type `relmayer` before prompt.
# Limitations
Seems to be overtrained to draw a collar.
# Examples
![samples](https://huggingface.co/Phoeo/Re-l_Donovna_Mayer/resolve/main/output.jpg)
| 785ccb480c53527a924725a28d464921 |
sentence-transformers/bert-base-nli-stsb-mean-tokens | sentence-transformers | bert | 15 | 7,756 | sentence-transformers | 1 | sentence-similarity | true | true | true | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 3,822 | false | **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/bert-base-nli-stsb-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
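For semantic search, the resulting sentence embeddings are typically compared with cosine similarity. A self-contained sketch on stand-in vectors (random placeholders with the model's 768-d shape, not real embeddings):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# stand-ins for two 768-dimensional sentence embeddings
rng = np.random.default_rng(0)
u = rng.normal(size=768)
v = rng.normal(size=768)

print(round(cosine_sim(u, u), 3))  # 1.0 — a vector is maximally similar to itself
```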
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-base-nli-stsb-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-stsb-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-stsb-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-stsb-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | e2e70375eb9f57530863fd0d64215f36 |
Scrya/whisper-medium-id-augmented | Scrya | whisper | 17 | 6 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['id'] | ['google/fleurs', 'indonesian-nlp/librivox-indonesia', 'mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 3,099 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium ID - FLEURS-CV-LBV - Augmented
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the following datasets:
- [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)
- [google/fleurs](https://huggingface.co/datasets/google/fleurs)
- [indonesian-nlp/librivox-indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia)
It achieves the following results on the evaluation set (Common Voice 11.0):
- Loss: 0.2788
- Wer: 7.6132
- Cer: 2.3332
## Model description
More information needed
## Intended uses & limitations
More information needed
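As an illustration of the intended use, the checkpoint can transcribe Indonesian speech through the `transformers` ASR pipeline. The sketch below is only a starting point: the audio file name is a placeholder, and long recordings are chunked into Whisper's 30-second windows.

```python
def chunk_kwargs(chunk_length_s=30):
    """Whisper operates on 30-second windows, so longer audio is chunked."""
    return {"chunk_length_s": chunk_length_s}

def transcribe(audio_path, model_name="Scrya/whisper-medium-id-augmented"):
    # Heavy import kept local so the helper above works without transformers installed.
    from transformers import pipeline
    asr = pipeline("automatic-speech-recognition", model=model_name, **chunk_kwargs())
    return asr(audio_path)["text"]

if __name__ == "__main__":
    # "sample_id.wav" is a placeholder path, not a file shipped with this repository.
    print(transcribe("sample_id.wav"))
```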
## Training and evaluation data
Training:
- [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (train+validation)
- [google/fleurs](https://huggingface.co/datasets/google/fleurs) (train+validation)
- [indonesian-nlp/librivox-indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia) (train)
Evaluation:
- [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) (test)
- [google/fleurs](https://huggingface.co/datasets/google/fleurs) (test)
- [indonesian-nlp/librivox-indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia) (test)
## Training procedure
Datasets were augmented on-the-fly using [audiomentations](https://github.com/iver56/audiomentations) via PitchShift, AddGaussianNoise and TimeStretch transformations at `p=0.3`.
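The augmentation described above can be sketched as follows. Only `p=0.3` comes from this card; every other transformation parameter is left at the audiomentations defaults, which is an assumption about the original training setup.

```python
def build_augmenter(p=0.3):
    """Compose the three transformations named above.

    Parameter ranges other than `p` are assumptions (audiomentations defaults).
    """
    # Third-party import kept local so the module stays importable without it.
    from audiomentations import AddGaussianNoise, Compose, PitchShift, TimeStretch
    return Compose([
        PitchShift(p=p),
        AddGaussianNoise(p=p),
        TimeStretch(p=p),
    ])

def augment(samples, sample_rate=16_000):
    """Apply the augmentation pipeline to a numpy array of audio samples."""
    return build_augmenter()(samples=samples, sample_rate=sample_rate)
```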
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.3002 | 1.9 | 1000 | 0.1659 | 8.1850 | 2.5333 |
| 0.0514 | 3.8 | 2000 | 0.1818 | 8.0559 | 2.5244 |
| 0.0145 | 5.7 | 3000 | 0.2150 | 7.8945 | 2.5281 |
| 0.0037 | 7.6 | 4000 | 0.2248 | 7.7100 | 2.3738 |
| 0.0016 | 9.51 | 5000 | 0.2402 | 7.6224 | 2.3591 |
| 0.0009 | 11.41 | 6000 | 0.2525 | 7.7654 | 2.3952 |
| 0.0005 | 13.31 | 7000 | 0.2609 | 7.5994 | 2.3487 |
| 0.0008 | 15.21 | 8000 | 0.2682 | 7.5855 | 2.3347 |
| 0.0002 | 17.11 | 9000 | 0.2756 | 7.6178 | 2.3288 |
| 0.0002 | 19.01 | 10000 | 0.2788 | 7.6132 | 2.3332 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| c5643bca5810a06829e0d4f7d65d312e |
erickfm/t5-small-finetuned-bias | erickfm | t5 | 7 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['WNC'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 492 | false |
This model is a checkpoint of [T5-small](https://huggingface.co/t5-small) fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://github.com/rpryzant/neutralizing-bias), a labeled dataset composed of 180,000 biased and neutralized sentence pairs generated from Wikipedia edits tagged for “neutral point of view”. This model reaches an accuracy of 0.32 on a dev split of the WNC.
For more details about T5, check out this [model card](https://huggingface.co/t5-small).
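A minimal inference sketch is shown below. The task prefix (if any) used during fine-tuning is not documented in this card, so the empty prefix here is an assumption.

```python
def build_input(sentence, prefix=""):
    """Serialize a sentence for the model.

    The exact task prefix used in fine-tuning is not documented in this card;
    an empty prefix is assumed here.
    """
    return prefix + sentence

def neutralize(sentence, model_name="erickfm/t5-small-finetuned-bias"):
    # Heavy imports kept local so build_input stays importable on its own.
    from transformers import T5ForConditionalGeneration, T5Tokenizer
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    ids = tokenizer(build_input(sentence), return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=64)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```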
| 73e0eebc556cab9559a065a44a4a5633 |
unicamp-dl/ptt5-base-pt-msmarco-100k-v2 | unicamp-dl | t5 | 7 | 7 | transformers | 0 | text2text-generation | true | false | false | mit | ['pt'] | ['msmarco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['msmarco', 't5', 'pytorch', 'tensorflow', 'pt', 'pt-br'] | false | true | true | 1,324 | false | # PTT5-base Reranker finetuned on Portuguese MS MARCO
## Introduction
ptt5-base-msmarco-pt-100k-v2 is a T5-based model pretrained on the BrWAC corpus and fine-tuned on a Portuguese-translated version of the MS MARCO passage dataset. In the v2 version, the Portuguese dataset was translated using Google Translate. This model was fine-tuned for 100k steps.
Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'unicamp-dl/ptt5-base-msmarco-pt-100k-v2'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
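Beyond loading the model, reranking requires serializing a query–document pair and scoring it. The sketch below assumes a monoT5-style input template and `true`/`false` target tokens; the canonical format lives in the mMARCO repository, so treat this only as an illustration.

```python
def build_prompt(query, document):
    """monoT5-style input; the exact template used in training is an assumption
    here (see the mMARCO repository for the canonical format)."""
    return f"Query: {query} Document: {document} Relevant:"

def rerank_score(query, document,
                 model_name="unicamp-dl/ptt5-base-pt-msmarco-100k-v2"):
    # Heavy imports kept local so build_prompt stays importable on its own.
    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    inputs = tokenizer(build_prompt(query, document), return_tensors="pt")
    # Score = probability mass on the "true" token at the first decoding step.
    decoder_input = torch.full((1, 1), model.config.decoder_start_token_id)
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_input).logits[0, 0]
    true_id = tokenizer.encode("true")[0]
    false_id = tokenizer.encode("false")[0]
    return torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
```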
# Citation
If you use ptt5-base-msmarco-pt-100k-v2, please cite:
@misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
| a042ac5a82d7e8ce4c823868a9374355 |
annahaz/xlm-roberta-base-misogyny-sexism-fr-indomain-bal | annahaz | xlm-roberta | 10 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,707 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-fr-indomain-bal
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9526
- Accuracy: 0.8690
- F1: 0.0079
- Precision: 0.1053
- Recall: 0.0041
- Mae: 0.1310
## Model description
More information needed
## Intended uses & limitations
More information needed
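A minimal classification sketch follows. The label names in `ID2LABEL` are hypothetical, since the card does not publish an id2label mapping; verify them against the checkpoint's config before relying on them.

```python
# Hypothetical label mapping — the card does not publish id2label.
ID2LABEL = {0: "non-misogynous", 1: "misogynous"}

def top_label(logits, id2label=ID2LABEL):
    """Map the argmax of a row of logits to its label name."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

def classify(text, model_name="annahaz/xlm-roberta-base-misogyny-sexism-fr-indomain-bal"):
    # Heavy imports kept local so top_label stays importable on its own.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    with torch.no_grad():
        logits = model(**tok(text, return_tensors="pt", truncation=True)).logits[0]
    return top_label(logits.tolist())
```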
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3961 | 1.0 | 1613 | 0.7069 | 0.8648 | 0.0171 | 0.1125 | 0.0093 | 0.1352 |
| 0.338 | 2.0 | 3226 | 0.7963 | 0.8659 | 0.0172 | 0.125 | 0.0093 | 0.1341 |
| 0.2794 | 3.0 | 4839 | 0.8851 | 0.8656 | 0.0134 | 0.1 | 0.0072 | 0.1344 |
| 0.2345 | 4.0 | 6452 | 0.9526 | 0.8690 | 0.0079 | 0.1053 | 0.0041 | 0.1310 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| aead1d512f9006a76292aa5af9c7e5b5 |
Nobody138/xlm-roberta-base-finetuned-panx-fr | Nobody138 | xlm-roberta | 10 | 13 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
## Model description
More information needed
## Intended uses & limitations
More information needed
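For illustration, the fine-tuned checkpoint can be queried through the token-classification pipeline. This is a sketch; the entity labels follow the PAN-X scheme of the xtreme dataset.

```python
def group_by_entity(tokens):
    """Collect (word, entity_group) pairs from pipeline output dicts."""
    return [(t["word"], t["entity_group"]) for t in tokens]

def extract_entities(text, model_name="Nobody138/xlm-roberta-base-finetuned-panx-fr"):
    # Heavy import kept local so group_by_entity stays importable on its own.
    from transformers import pipeline
    ner = pipeline("token-classification", model=model_name,
                   aggregation_strategy="simple")
    return group_by_entity(ner(text))
```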
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 9170a15bf596b80c67d4063f14b117d0 |
IIIT-L/roberta-large-finetuned-TRAC-DS | IIIT-L | roberta | 11 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,113 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-TRAC-DS
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8198
- Accuracy: 0.7190
- Precision: 0.6955
- Recall: 0.6979
- F1: 0.6963
## Model description
More information needed
## Intended uses & limitations
More information needed
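A minimal classification sketch follows. The label mapping is hypothetical: TRAC typically distinguishes non-aggressive, covertly aggressive, and overtly aggressive text, but this card does not publish its id2label, so check the checkpoint's config.

```python
# Hypothetical mapping — the card does not publish id2label; TRAC usually uses
# non-aggressive (NAG) / covertly (CAG) / overtly (OAG) aggressive classes.
ID2LABEL = {0: "NAG", 1: "CAG", 2: "OAG"}

def top_label(logits, id2label=ID2LABEL):
    """Map the argmax of a row of logits to its label name."""
    return id2label[max(range(len(logits)), key=lambda i: logits[i])]

def classify(text, model_name="IIIT-L/roberta-large-finetuned-TRAC-DS"):
    # Heavy imports kept local so top_label stays importable on its own.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    with torch.no_grad():
        logits = model(**tok(text, return_tensors="pt", truncation=True)).logits[0]
    return top_label(logits.tolist())
```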
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9538 | 1.0 | 612 | 0.8083 | 0.6111 | 0.6192 | 0.6164 | 0.5994 |
| 0.7924 | 2.0 | 1224 | 0.7594 | 0.6601 | 0.6688 | 0.6751 | 0.6424 |
| 0.6844 | 3.0 | 1836 | 0.6986 | 0.7042 | 0.6860 | 0.6969 | 0.6858 |
| 0.5715 | 3.99 | 2448 | 0.7216 | 0.7075 | 0.6957 | 0.6978 | 0.6925 |
| 0.45 | 4.99 | 3060 | 0.7963 | 0.7288 | 0.7126 | 0.7074 | 0.7073 |
| 0.352 | 5.99 | 3672 | 1.0824 | 0.7141 | 0.6999 | 0.6774 | 0.6818 |
| 0.2546 | 6.99 | 4284 | 1.0884 | 0.7230 | 0.7006 | 0.7083 | 0.7028 |
| 0.1975 | 7.99 | 4896 | 1.5338 | 0.7337 | 0.7090 | 0.7063 | 0.7074 |
| 0.1656 | 8.99 | 5508 | 1.8182 | 0.7100 | 0.6882 | 0.6989 | 0.6896 |
| 0.1358 | 9.98 | 6120 | 2.1623 | 0.7173 | 0.6917 | 0.6959 | 0.6934 |
| 0.1235 | 10.98 | 6732 | 2.3249 | 0.7141 | 0.6881 | 0.6914 | 0.6888 |
| 0.1003 | 11.98 | 7344 | 2.3474 | 0.7124 | 0.6866 | 0.6920 | 0.6887 |
| 0.0826 | 12.98 | 7956 | 2.3574 | 0.7083 | 0.6853 | 0.6959 | 0.6874 |
| 0.0727 | 13.98 | 8568 | 2.4989 | 0.7116 | 0.6858 | 0.6934 | 0.6883 |
| 0.0553 | 14.98 | 9180 | 2.8090 | 0.7026 | 0.6747 | 0.6710 | 0.6725 |
| 0.0433 | 15.97 | 9792 | 2.6647 | 0.7255 | 0.7010 | 0.7028 | 0.7018 |
| 0.0449 | 16.97 | 10404 | 2.6568 | 0.7247 | 0.7053 | 0.6997 | 0.7010 |
| 0.0373 | 17.97 | 11016 | 2.7632 | 0.7149 | 0.6888 | 0.6938 | 0.6909 |
| 0.0278 | 18.97 | 11628 | 2.8245 | 0.7124 | 0.6866 | 0.6930 | 0.6889 |
| 0.0288 | 19.97 | 12240 | 2.8198 | 0.7190 | 0.6955 | 0.6979 | 0.6963 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
| af7ce25b4bbc39055b291c612963918f |
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-0 | anas-awadalla | bart | 18 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 963 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
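A minimal question-answering sketch follows. The input serialization used in fine-tuning is not documented here, so the common `question: ... context: ...` format is an assumption.

```python
def build_input(question, context):
    """Serialize a QA example; the 'question: ... context: ...' format is an
    assumption, since the card does not document the training serialization."""
    return f"question: {question} context: {context}"

def answer(question, context,
           model_name="anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-0"):
    # Heavy imports kept local so build_input stays importable on its own.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    ids = tok(build_input(question, context), return_tensors="pt",
              truncation=True).input_ids
    out = model.generate(ids, max_new_tokens=32)
    return tok.decode(out[0], skip_special_tokens=True)
```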
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| ff60f075e23c2e15d65b06e3a79c2813 |
aminian/ML-final-project | aminian | null | 5 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['code'] | false | true | true | 3,670 | false |
# Model Card for Model ID
This model recommends movies to users based on the movies they have voted for.
# Model Details
The model consists of three parts
- Content-based filtering
- Collaborative filtering
- Ensemble model
## Model Description
Content-based filtering recommends movies based on the content (e.g. genre, actors) of the movies a user has previously voted for.
Collaborative filtering finds users with similar interests and recommends movies that similar users have voted for to users who have not. It does not depend on movie content and needs no domain knowledge.
The ensemble model combines the two methods above to give better recommendations: the algorithm finds similar users, recommends films based on their votes, and filters the results by content preferences.
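The content-based step can be sketched with scikit-learn's TF-IDF and cosine similarity over movie keywords. This is an illustration under assumed data shapes, not the project's exact code.

```python
def rank_by_score(titles, scores, k=2):
    """Return the k titles with the highest similarity scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [titles[i] for i in order[:k]]

def content_recommend(liked_keywords, catalogue, k=2):
    """catalogue: {title: keyword string}. TF-IDF plus cosine similarity,
    as in the content-based part described above (a sketch only)."""
    # Heavy imports kept local so rank_by_score stays importable on its own.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    titles = list(catalogue)
    tfidf = TfidfVectorizer()
    doc_vecs = tfidf.fit_transform([catalogue[t] for t in titles])
    user_vec = tfidf.transform([liked_keywords])
    scores = cosine_similarity(user_vec, doc_vecs)[0]
    return rank_by_score(titles, list(scores), k)
```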
- Developed by: Aida Aminian, Mohammadreza Mohammadzadeh Asl
<!--- Shared by [optional]: [More Information Needed]-->
- Model type: content-based filtering, collaborative filtering, and an ensemble of the two
- Language(s) (NLP): not used; only TF-IDF over keywords
- License: MIT License
## Model Sources
MovieLens dataset
# Uses
Building recommendation systems
## Direct Use
Movie recommendations based on content and other similar people.
<!-- ## Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- [More Information Needed] -->
# Bias, Risks, and Limitations
This ML model is based on an IMDB movie dataset. The dataset may be skewed towards English-language movies.
## Recommendations
Add other metrics to model
## How to Get Started with the Model
Install the scikit-learn, pandas, and NumPy libraries for Python. Download the MovieLens dataset and put it in the 'content/IMDB' path in the project directory. Run the code with the Python interpreter.
# Training Details
## Training Data
IMDB Movies
### Preprocessing
Extracting features from keywords.
<!-- ### Speeds, Sizes, Times -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
<!-- [More Information Needed] -->
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
<!-- ## Testing Data, Factors & Metrics -->
<!-- ### Testing Data -->
<!-- This should link to a Data Card if possible. -->
<!-- [More Information Needed] -->
### Factors
We removed some erroneous rows from the dataset.
<!-- [More Information Needed] -->
### Metrics
precision@k and recall@k
<!-- [More Information Needed] -->
<!-- ## Results -->
<!-- [More Information Needed] -->
### Summary
<!-- # Environmental Impact -->
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->
<!-- - Hardware Type: [More Information Needed] -->
<!-- - Hours used: [More Information Needed] -->
<!-- - Cloud Provider: [More Information Needed] -->
<!-- - Compute Region: [More Information Needed] -->
<!-- - Carbon Emitted: [More Information Needed] -->
# Technical Specifications
## Model Architecture and Objective
Content-based filtering.
<!-- ## Compute Infrastructure -->
<!-- [More Information Needed] -->
### Hardware
Works fine on Google Colab
### Software
python, sklearn, numpy, pandas
<!-- # Model Card Contact -->
<!-- [More Information Needed] --> | b0c4d9866a5861569e52c8335aa79cec |
ConvLab/t5-small-nlu-multiwoz21-context3 | ConvLab | t5 | 7 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | ['en'] | ['ConvLab/multiwoz21'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['t5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog'] | true | true | true | 745 | false |
# t5-small-nlu-multiwoz21-context3
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21) with context window size == 3.
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
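For illustration only, the checkpoint can be loaded with plain `transformers` as below. The context serialization here is a naive assumption; ConvLab-3 defines the real input/output format, so refer there for actual usage.

```python
def serialize_context(utterances, window=3):
    """Join the last `window` utterances into one string. The serialization
    ConvLab-3 actually uses is more elaborate — this is only an assumption."""
    return " ".join(utterances[-window:])

def parse(utterances, model_name="ConvLab/t5-small-nlu-multiwoz21-context3"):
    # Heavy imports kept local so serialize_context stays importable on its own.
    from transformers import T5ForConditionalGeneration, T5Tokenizer
    tok = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    ids = tok(serialize_context(utterances), return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)
```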
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| 0d13d6d4b897b3c91b5ab86b4a59c6e6 |
Malisha/donut-base-ttform | Malisha | vision-encoder-decoder | 15 | 4 | transformers | 0 | null | true | false | false | mit | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 943 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ttform
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
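A minimal document-parsing sketch follows. The task-start token `<s_ttform>` is a guess based on the repository name and may differ from the prompt this checkpoint was actually trained with.

```python
def strip_task_tokens(sequence, task_token="<s_ttform>"):
    """Remove the task-start token from a generated sequence; '<s_ttform>' is a
    guess at this checkpoint's prompt and may differ."""
    return sequence.replace(task_token, "").strip()

def parse_document(image_path, task_token="<s_ttform>",
                   model_name="Malisha/donut-base-ttform"):
    # Heavy imports kept local so strip_task_tokens stays importable on its own.
    from PIL import Image
    from transformers import DonutProcessor, VisionEncoderDecoderModel
    processor = DonutProcessor.from_pretrained(model_name)
    model = VisionEncoderDecoderModel.from_pretrained(model_name)
    pixel_values = processor(Image.open(image_path).convert("RGB"),
                             return_tensors="pt").pixel_values
    prompt_ids = processor.tokenizer(task_token, add_special_tokens=False,
                                     return_tensors="pt").input_ids
    out = model.generate(pixel_values, decoder_input_ids=prompt_ids, max_length=512)
    return strip_task_tokens(processor.batch_decode(out, skip_special_tokens=True)[0])
```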
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| a0c08b136f1645cc5d3b877cc00f7fcd |
nlp-esg-scoring/bert-base-finetuned-esg-gri-clean | nlp-esg-scoring | bert | 8 | 3 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,909 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-gri-clean
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9511
- Validation Loss: 1.5293
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
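As a fill-mask model, the checkpoint can be queried through the pipeline API. A minimal sketch (the input must contain the tokenizer's `[MASK]` token):

```python
def top_tokens(predictions, k=3):
    """Pull the k highest-scoring token strings out of fill-mask output."""
    return [p["token_str"] for p in predictions[:k]]

def fill(text, model_name="nlp-esg-scoring/bert-base-finetuned-esg-gri-clean"):
    # Heavy import kept local so top_tokens stays importable on its own.
    from transformers import pipeline
    unmasker = pipeline("fill-mask", model=model_name)
    return top_tokens(unmasker(text))
```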
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -797, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9468 | 1.5190 | 0 |
| 1.9433 | 1.5186 | 1 |
| 1.9569 | 1.4843 | 2 |
| 1.9510 | 1.5563 | 3 |
| 1.9451 | 1.5308 | 4 |
| 1.9576 | 1.5209 | 5 |
| 1.9464 | 1.5324 | 6 |
| 1.9525 | 1.5168 | 7 |
| 1.9488 | 1.5340 | 8 |
| 1.9511 | 1.5293 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| cb2bf6c1850b73ad1fa2624ff7af5460 |
morganchen1007/resnet-50-finetuned-resnet50_0831 | morganchen1007 | resnet | 12 | 9 | transformers | 0 | image-classification | true | false | false | apache-2.0 | null | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,490 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-resnet50_0831
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0862
- Accuracy: 0.9764
## Model description
More information needed
## Intended uses & limitations
More information needed
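A minimal classification sketch using the image-classification pipeline. The label set comes from the (undocumented) imagefolder dataset, so the output labels are whatever the checkpoint's config defines.

```python
def best_prediction(results):
    """Pick the highest-scoring label from image-classification pipeline output."""
    return max(results, key=lambda r: r["score"])["label"]

def classify_image(image_path,
                   model_name="morganchen1007/resnet-50-finetuned-resnet50_0831"):
    # Heavy import kept local so best_prediction stays importable on its own.
    from transformers import pipeline
    clf = pipeline("image-classification", model=model_name)
    return best_prediction(clf(image_path))
```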
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9066 | 1.0 | 223 | 0.8770 | 0.6659 |
| 0.5407 | 2.0 | 446 | 0.4251 | 0.7867 |
| 0.3614 | 3.0 | 669 | 0.2009 | 0.9390 |
| 0.3016 | 4.0 | 892 | 0.1362 | 0.9582 |
| 0.2358 | 5.0 | 1115 | 0.1139 | 0.9676 |
| 0.247 | 6.0 | 1338 | 0.1081 | 0.9698 |
| 0.2135 | 7.0 | 1561 | 0.1027 | 0.9720 |
| 0.2043 | 8.0 | 1784 | 0.1026 | 0.9695 |
| 0.2165 | 9.0 | 2007 | 0.0957 | 0.9733 |
| 0.1983 | 10.0 | 2230 | 0.0936 | 0.9736 |
| 0.2116 | 11.0 | 2453 | 0.0949 | 0.9736 |
| 0.2341 | 12.0 | 2676 | 0.0905 | 0.9755 |
| 0.2004 | 13.0 | 2899 | 0.0901 | 0.9739 |
| 0.1956 | 14.0 | 3122 | 0.0877 | 0.9755 |
| 0.1668 | 15.0 | 3345 | 0.0847 | 0.9764 |
| 0.1855 | 16.0 | 3568 | 0.0850 | 0.9755 |
| 0.18 | 17.0 | 3791 | 0.0897 | 0.9745 |
| 0.1772 | 18.0 | 4014 | 0.0852 | 0.9755 |
| 0.1881 | 19.0 | 4237 | 0.0845 | 0.9764 |
| 0.2145 | 20.0 | 4460 | 0.0862 | 0.9764 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| 3e10b1aeb971989582c9a26e0b2d4112 |
skylord/wav2vec2-large-xlsr-hindi | skylord | wav2vec2 | 10 | 24 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hi'] | ['common_voice', 'indic tts', 'iiith'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | true | true | 8,570 | false |
# Wav2Vec2-Large-XLSR-53-Hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the following datasets:
- [Common Voice](https://huggingface.co/datasets/common_voice),
- [Indic TTS- IITM](https://www.iitm.ac.in/donlab/tts/index.php) and
- [IIITH - Indic Speech Datasets](http://speech.iiit.ac.in/index.php/research-svl/69.html)
The Indic datasets are well balanced across gender and accents. However, the CommonVoice dataset is skewed towards male voices.
Fine-tuning facebook/wav2vec2-large-xlsr-53 on the combined Hindi dataset for 60 epochs yields a WER of 17.05%.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Predictions
*Some good ones...*
| Predictions | Reference |
|-------|-------|
|फिर वो सूरज तारे पहाड बारिश पदछड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है | फिर वो सूरज तारे पहाड़ बारिश पतझड़ दिन रात शाम नदी बर्फ़ समुद्र धुंध हवा कुछ भी हो सकती है |
| इस कारण जंगल में बडी दूर स्थित राघव के आश्रम में लोघ कम आने लगे और अधिकांश भक्त सुंदर के आश्रम में जाने लगे | इस कारण जंगल में बड़ी दूर स्थित राघव के आश्रम में लोग कम आने लगे और अधिकांश भक्त सुन्दर के आश्रम में जाने लगे |
| अपने बचन के अनुसार शुभमूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा | अपने बचन के अनुसार शुभमुहूर्त पर अनंत दक्षिणी पर्वत गया और मंत्रों का जप करके सरोवर में उतरा |
*Some crappy stuff...*
| Predictions | Reference |
|-------|-------|
| वस गनिल साफ़ है। | उसका दिल साफ़ है। |
| चाय वा एक कुछ लैंगे हब | चायवाय कुछ लेंगे आप |
| टॉम आधे है स्कूल हें है | टॉम अभी भी स्कूल में है |
## Evaluation
The model can be evaluated as follows on the following two datasets:
1. Custom dataset created from 20% of Indic, IIITH and CV (test): WER 17.xx%
2. CommonVoice Hindi test dataset: WER 56.xx%
Links to the datasets are provided above (check the links at the start of the README)
train-test csv files are shared on the following gdrive links:
a. IIITH [train](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_train.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/iiit_hi_test.csv)
b. Indic TTS [train](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_train_full.csv) [test](https://storage.googleapis.com/indic-dataset/train_test_splits/indic_test_full.csv)
Update the audio_path as per your local file structure.
```python
import torch
import torchaudio
import datasets
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
## Load the datasets
common_voice = load_dataset("common_voice", "hi")
indic = load_dataset("csv", data_files= {'train':"/workspace/data/hi2/indic_train_full.csv",
"test": "/workspace/data/hi2/indic_test_full.csv"}, download_mode="force_redownload")
iiith = load_dataset("csv", data_files= {"train": "/workspace/data/hi2/iiit_hi_train.csv",
"test": "/workspace/data/hi2/iiit_hi_test.csv"}, download_mode="force_redownload")
## Pre-process datasets and concatenate to create test dataset
# Drop columns of common_voice
split = ['train', 'test', 'validation', 'other', 'invalidated']
for sp in split:
common_voice[sp] = common_voice[sp].remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])
common_voice = common_voice.rename_column('path', 'audio_path')
common_voice = common_voice.rename_column('sentence', 'target_text')
train_dataset = datasets.concatenate_datasets([indic['train'], iiith['train'], common_voice['train']])
test_dataset = datasets.concatenate_datasets([indic['test'], iiith['test'], common_voice['test'], common_voice['validation']])
## Load model from HF hub
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["target_text"])
batch["target_text"] = re.sub(unicode_ignore_regex, '', batch["target_text"])
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["target_text"])))
```
**Test Result on custom dataset**: 17.23 %
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("skylord/wav2vec2-large-xlsr-hindi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\'\;\:\"\“\%\‘\”\�Utrnle\_]'
unicode_ignore_regex = r'[dceMaWpmFui\xa0\u200d]' # Some unwanted unicode chars
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
    batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result on CommonVoice**: 56.46 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training & wandb dashboard can be found [here](https://wandb.ai/thinkevolve/huggingface/reports/Project-Hindi-XLSR-Large--Vmlldzo2MTI2MTQ)
| 24abc1e3825dbb543766bc6339d297cd |
Geotrend/bert-base-zh-cased | Geotrend | bert | 8 | 6 | transformers | 0 | fill-mask | true | true | true | apache-2.0 | ['zh'] | ['wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,283 | false |
# bert-base-zh-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-zh-cased")
```
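The snippet above returns token-level hidden states. If you need a single fixed-size sentence vector, a common recipe is masked mean pooling over `last_hidden_state`. The sketch below illustrates the arithmetic on a dummy NumPy array; the shapes (batch=1, seq=4, hidden=8) are placeholders, not outputs of this model:

```python
import numpy as np

# Dummy "last_hidden_state". In practice it would come from
# model(**tokenizer(text, return_tensors="pt")).last_hidden_state.
hidden = np.arange(32, dtype=np.float32).reshape(1, 4, 8)    # (batch, seq, hidden)
attention_mask = np.array([[1, 1, 1, 0]], dtype=np.float32)  # last token is padding

mask = attention_mask[..., None]      # (batch, seq, 1), broadcastable
summed = (hidden * mask).sum(axis=1)  # sum over non-padded tokens
counts = mask.sum(axis=1)             # number of real tokens per example
sentence_vec = summed / counts        # (batch, hidden)
print(sentence_vec.shape)  # (1, 8)
```

Padding positions are zeroed out before averaging, so the sentence vector only reflects real tokens.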
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
| 2477a2a49ea549589bd3112161959157 |
kejian/final-mle-again | kejian | gpt2 | 140 | 1 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,110 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-mle-again
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
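Two quick sanity checks on these numbers, using values taken from the full config in this card (`per_device_train_batch_size: 16`, `effective_batch_size: 64`, `num_tokens: 3.3e9`); the single-GPU assumption is ours:

```python
# Effective batch size = per-device batch * num_gpus * gradient accumulation steps.
per_device, effective, num_gpus = 16, 64, 1  # num_gpus=1 is an assumption
accum_steps = effective // (per_device * num_gpus)
print(accum_steps)  # 4

# training_steps * effective batch * sequence length ~= total training tokens,
# which is consistent with a sequence length of 1024.
num_tokens, steps = 3_300_000_000, 50_354
seq_len = round(num_tokens / (steps * effective))
print(seq_len)  # 1024
```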
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-mle-again',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/u7vbiehz | 1ee8887258f10ee41dd9bb3f4f26cf26 |
sd-concepts-library/mizkif | sd-concepts-library | null | 8 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,159 | false | ### Mizkif on Stable Diffusion
This is the `<mizkif>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
<br>
<h3>Here are some images I rendered with this model</h3>
<span>graffiti wall</span>
<img src="https://i.imgur.com/PIq7Y0w.png" alt="graffiti wall" width="200"/>
<span>stained glass</span>
<img src="https://i.imgur.com/QcwB5GF.png" alt="stained glass" width="200"/>
<br>
<h3>Here are the images I used to train the model</h3>
![<mizkif> 0](https://huggingface.co/sd-concepts-library/mizkif/resolve/main/concept_images/1.jpeg)
![<mizkif> 1](https://huggingface.co/sd-concepts-library/mizkif/resolve/main/concept_images/0.jpeg)
![<mizkif> 2](https://huggingface.co/sd-concepts-library/mizkif/resolve/main/concept_images/2.jpeg)
| 7f68e379ff2a71fcb8749d40df97e90e |
nvidia/slu_conformer_transformer_large_slurp | nvidia | null | 3 | 1 | nemo | 0 | null | true | false | false | cc-by-4.0 | ['en'] | ['SLURP'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spoken-language-understanding', 'speech-intent-classification', 'speech-slot-filling', 'SLURP', 'Conformer', 'Transformer', 'pytorch', 'NeMo'] | true | true | true | 4,644 | false | # NeMo End-to-End Speech Intent Classification and Slot Filling
## Model Overview
This model performs joint intent classification and slot filling, directly from audio input. The model treats the problem as an audio-to-text problem, where the output text is the flattened string representation of the semantics annotation. The model is trained on the SLURP dataset [1].
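To make the "flattened string" idea concrete, here is an illustrative serialization of a SLURP-style annotation into a generation target. The exact format used by the NeMo recipe may differ; this only sketches the idea of casting intent plus slots as text:

```python
def flatten_semantics(ann: dict) -> str:
    """Illustrative flattening of a SLURP-style semantics dict into a target string."""
    parts = [f"scenario: {ann['scenario']}", f"action: {ann['action']}"]
    for ent in ann.get("entities", []):
        parts.append(f"{ent['type']}: {ent['filler']}")
    return " ; ".join(parts)

ann = {
    "scenario": "alarm",
    "action": "set",
    "entities": [{"type": "time", "filler": "seven am"}],
}
print(flatten_semantics(ann))
# scenario: alarm ; action: set ; time: seven am
```

Once semantics are a string, the task reduces to ordinary sequence-to-sequence training on (audio, string) pairs.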
## Model Architecture
The model has an encoder-decoder architecture, where the encoder is a Conformer-Large model [2] and the decoder is a three-layer Transformer decoder [3]. We use the Conformer encoder pretrained on NeMo ASR-Set (details [here](https://ngc.nvidia.com/models/nvidia:nemo:stt_en_conformer_ctc_large)), while the decoder is trained from scratch. Start-of-sentence (BOS) and end-of-sentence (EOS) tokens are added to each sentence. The model is trained end-to-end by minimizing the negative log-likelihood loss with teacher forcing. During inference, the prediction is generated by beam search, where a BOS token is used to trigger the generation process.
## Training
The NeMo toolkit [4] was used for training the models for around 100 epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/slurp/run_slurp_train.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/slurp/configs/conformer_transformer_large_bpe.yaml).
The tokenizers for these models were built using the semantics annotations of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). We use a vocabulary size of 58, including the BOS, EOS and padding tokens.
Details on how to train the model can be found [here](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/speech_intent_slot/README.md).
### Datasets
The model is trained on the combined real and synthetic training sets of the SLURP dataset.
## Performance
| | | | | **Intent (Scenario_Action)** | | **Entity** | | | **SLURP Metrics** | |
|-------|--------------------------------------------------|----------------|--------------------------|------------------------------|---------------|------------|--------|--------------|-------------------|---------------------|
|**Version**| **Model**                                        | **Params (M)** | **Pretrained**           | **Accuracy**                 | **Precision** | **Recall** | **F1** | **Precision** | **Recall**        | **F1**              |
|1.13.0| Conformer-Transformer-Large | 127 | NeMo ASR-Set 3.0 | 90.14 | 78.95 | 74.93 | 76.89 | 84.31 | 80.33 | 82.27 |
|Baseline| Conformer-Transformer-Large | 127 | None | 72.56 | 43.19 | 43.5 | 43.34 | 53.59 | 53.92 | 53.76 |
Note: during inference, we use a beam size of 32 and a temperature of 1.25.
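The temperature divides the logits before normalization, so values above 1 flatten the next-token distribution that beam search scores. A small illustration with toy logits (not taken from the model):

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Softmax over logits / T; T > 1 flattens, T < 1 sharpens the distribution."""
    z = np.asarray(logits, dtype=np.float64) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.0]
p1 = softmax_with_temperature(logits, 1.0)
p125 = softmax_with_temperature(logits, 1.25)
print(p1.max() > p125.max())  # True: higher temperature -> flatter distribution
```

A flatter distribution lets more hypotheses survive in the beam, which the authors found helpful at T=1.25.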
## How to Use this Model
The model is available for use in the NeMo toolkit [4], and can be used on another dataset with the same annotation format.
### Automatically load the model from NGC
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.SLUIntentSlotBPEModel.from_pretrained(model_name="slu_conformer_transformer_large_slurp")
```
### Predict intents and slots with this model
```shell
python [NEMO_GIT_FOLDER]/examples/slu/speech_intent_slot/eval_utils/inference.py \
pretrained_name="slu_conformer_transformer_large_slurp" \
audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
sequence_generator.type="<'beam' OR 'greedy' FOR BEAM/GREEDY SEARCH>" \
sequence_generator.beam_size="<SIZE OF BEAM>" \
sequence_generator.temperature="<TEMPERATURE FOR BEAM SEARCH>"
```
### Input
This model accepts 16000 Hz Mono-channel Audio (wav files) as input.
### Output
This model provides the intent and slot annotations as a string for a given audio sample.
## Limitations
Since this model was trained on only the SLURP dataset [1], the performance of this model might degrade on other datasets.
## References
[1] [SLURP: A Spoken Language Understanding Resource Package](https://arxiv.org/abs/2011.13205)
[2] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[3] [Attention Is All You Need](https://arxiv.org/abs/1706.03762?context=cs)
[4] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
| 3405bac983fc82641694e216c50a51bb |
gayanin/bart-mlm-paraphrasing | gayanin | bart | 12 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,383 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-paraphrasing
This model is a fine-tuned version of [gayanin/bart-mlm-pubmed](https://huggingface.co/gayanin/bart-mlm-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4617
- Rouge2 Precision: 0.8361
- Rouge2 Recall: 0.6703
- Rouge2 Fmeasure: 0.7304
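The Rouge2 precision/recall/F-measure above score bigram overlap between a prediction and its reference. A minimal illustrative sketch (library implementations differ in tokenization and aggregation details):

```python
from collections import Counter

def rouge2(prediction: str, reference: str):
    """ROUGE-2 sketch: precision/recall/F1 over overlapping bigram counts."""
    def bigrams(text):
        toks = text.split()
        return Counter(zip(toks, toks[1:]))
    pred, ref = bigrams(prediction), bigrams(reference)
    overlap = sum((pred & ref).values())  # clipped bigram matches
    precision = overlap / max(sum(pred.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f

p, r, f = rouge2("the cat sat on the mat", "the cat lay on the mat")
print(round(p, 2), round(r, 2), round(f, 2))  # 0.6 0.6 0.6
```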
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.4845 | 1.0 | 1325 | 0.4270 | 0.8332 | 0.6701 | 0.7294 |
| 0.3911 | 2.0 | 2650 | 0.4195 | 0.8358 | 0.6713 | 0.7313 |
| 0.328 | 3.0 | 3975 | 0.4119 | 0.8355 | 0.6706 | 0.7304 |
| 0.2783 | 4.0 | 5300 | 0.4160 | 0.8347 | 0.6678 | 0.7284 |
| 0.2397 | 5.0 | 6625 | 0.4329 | 0.8411 | 0.6747 | 0.7351 |
| 0.2155 | 6.0 | 7950 | 0.4389 | 0.8382 | 0.6716 | 0.7321 |
| 0.1888 | 7.0 | 9275 | 0.4432 | 0.838 | 0.6718 | 0.7323 |
| 0.1724 | 8.0 | 10600 | 0.4496 | 0.8381 | 0.6714 | 0.7319 |
| 0.1586 | 9.0 | 11925 | 0.4575 | 0.8359 | 0.6704 | 0.7303 |
| 0.1496 | 10.0 | 13250 | 0.4617 | 0.8361 | 0.6703 | 0.7304 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| 847497846ec75aeef6d928fc0a770908 |
NimaBoscarino/unicorn_track_r50_mask | NimaBoscarino | null | 3 | 0 | null | 0 | object-detection | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'object-tracking', 'video', 'video-object-segmentation'] | false | true | true | 1,491 | false |
# unicorn_track_r50_mask
## Table of Contents
- [unicorn_track_r50_mask](#unicorn_track_r50_mask)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Evaluation Results](#evaluation-results)
<model_details>
## Model Details
Unicorn unifies the network architecture and the learning paradigm for four tracking tasks, and sets new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 800x1280.
- License: This model is licensed under the MIT license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2111.12085)
- [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)
</model_details>
<uses>
## Uses
#### Direct Use
This model can be used for:
* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)
<Eval_Results>
## Evaluation Results
LaSOT AUC (%): 65.3
BDD100K mMOTA (%): 35.1
DAVIS17 J&F (%): 66.2
BDD100K MOTS mMOTSA (%): 30.8
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@inproceedings{unicorn,
title={Towards Grand Unification of Object Tracking},
author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
booktitle={ECCV},
year={2022}
}
```
</Cite> | c525a415f3a23230f8db8b6615fbe8ef |
heyal/finetuning-sentiment-model-3000-samples | heyal | distilbert | 15 | 11 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3861
- Accuracy: 0.8675
- F1: 0.8704
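The accuracy and F1 reported above are standard binary classification metrics. As a reminder of how they relate, here is a minimal sketch on toy label/prediction lists (illustrative only):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy and F1 for binary labels (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

acc, f1 = binary_metrics([1, 0, 1, 1], [1, 0, 0, 1])
print(acc, round(f1, 2))  # 0.75 0.8
```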
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| f864f747d1651fab11554f0e7ae18ec4 |
rishabhjain16/whisper-ft-test | rishabhjain16 | whisper | 48 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,468 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5636
- Wer: 13.4646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1789 | 4.02 | 1000 | 0.3421 | 13.1199 |
| 0.0264 | 8.04 | 2000 | 0.4579 | 13.5155 |
| 0.0023 | 13.01 | 3000 | 0.5479 | 13.6539 |
| 0.0011 | 17.03 | 4000 | 0.5636 | 13.4646 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.12.1
| ff435bcb72013848f41d166373893723 |
patrickvonplaten/sew-small-100k-timit | patrickvonplaten | sew | 20 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['timit_asr'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'timit_asr', 'generated_from_trainer'] | true | true | true | 2,962 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-small-100k-timit
This model is a fine-tuned version of [asapp/sew-small-100k](https://huggingface.co/asapp/sew-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4926
- Wer: 0.2988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.071 | 0.69 | 100 | 3.0262 | 1.0 |
| 2.9304 | 1.38 | 200 | 2.9297 | 1.0 |
| 2.8823 | 2.07 | 300 | 2.8367 | 1.0 |
| 1.5668 | 2.76 | 400 | 1.2310 | 0.8807 |
| 0.7422 | 3.45 | 500 | 0.7080 | 0.5957 |
| 0.4121 | 4.14 | 600 | 0.5829 | 0.5073 |
| 0.3981 | 4.83 | 700 | 0.5153 | 0.4461 |
| 0.5038 | 5.52 | 800 | 0.4908 | 0.4151 |
| 0.2899 | 6.21 | 900 | 0.5122 | 0.4111 |
| 0.2198 | 6.9 | 1000 | 0.4908 | 0.3803 |
| 0.2129 | 7.59 | 1100 | 0.4668 | 0.3789 |
| 0.3007 | 8.28 | 1200 | 0.4788 | 0.3562 |
| 0.2264 | 8.97 | 1300 | 0.5113 | 0.3635 |
| 0.1536 | 9.66 | 1400 | 0.4950 | 0.3441 |
| 0.1206 | 10.34 | 1500 | 0.5062 | 0.3421 |
| 0.2021 | 11.03 | 1600 | 0.4900 | 0.3283 |
| 0.1458 | 11.72 | 1700 | 0.5019 | 0.3307 |
| 0.1151 | 12.41 | 1800 | 0.4989 | 0.3270 |
| 0.0985 | 13.1 | 1900 | 0.4925 | 0.3173 |
| 0.1412 | 13.79 | 2000 | 0.4868 | 0.3125 |
| 0.1579 | 14.48 | 2100 | 0.4983 | 0.3147 |
| 0.1043 | 15.17 | 2200 | 0.4914 | 0.3091 |
| 0.0773 | 15.86 | 2300 | 0.4858 | 0.3102 |
| 0.1327 | 16.55 | 2400 | 0.5084 | 0.3064 |
| 0.1281 | 17.24 | 2500 | 0.5017 | 0.3025 |
| 0.0845 | 17.93 | 2600 | 0.5001 | 0.3012 |
| 0.0717 | 18.62 | 2700 | 0.4894 | 0.3004 |
| 0.0835 | 19.31 | 2800 | 0.4963 | 0.2998 |
| 0.1181 | 20.0 | 2900 | 0.4926 | 0.2988 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
| 1fbb5186003e897ab7d8e72f4baff931 |
tomekkorbak/elegant_galileo | tomekkorbak | null | 2 | 0 | null | 0 | null | false | false | false | mit | ['en'] | ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 7,878 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# elegant_galileo
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.000286,
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'elegant_galileo',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/283v5dho | 74a0b90b0321156d5b7c9ede33169fed |
michelecafagna26/vinvl_vg_x152c4 | michelecafagna26 | null | 4 | 0 | pytorch | 0 | feature-extraction | true | false | false | mit | null | ['coco', 'openimagesv5', 'bbjects365v1', 'visualgenome'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'feature-extraction'] | false | true | true | 1,743 | false |
# Model Card: VinVL VisualBackbone
Disclaimer: The model is taken from the official repository, it can be found here: [microsoft/scene_graph_benchmark](https://github.com/microsoft/scene_graph_benchmark)
# Usage:
More info about how to use this model can be found here: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
# Quick start: Feature extraction
```python
from scene_graph_benchmark.wrappers import VinVLVisualBackbone
img_file = "scene_graph_benchmark/demo/woman_fish.jpg"
detector = VinVLVisualBackbone()
dets = detector(img_file)
```
`dets` contains the following keys: ["boxes", "classes", "scores", "features", "spatial_features"]
You can obtain the full VinVL visual features by concatenating the "features" and the "spatial_features":
```python
import numpy as np
v_feats = np.concatenate((dets['features'], dets['spatial_features']), axis=1)
# v_feats.shape = (num_boxes, 2054)
```
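For reference, the 6 extra dimensions in `spatial_features` are commonly the box corners normalized by image size plus the relative width and height (2048 + 6 = 2054). That layout is an assumption here (check the scene_graph_benchmark code for the exact definition), but the idea can be sketched as:

```python
import numpy as np

def spatial_features(boxes, img_w, img_h):
    """Illustrative 6-d spatial features: normalized corners + relative width/height."""
    boxes = np.asarray(boxes, dtype=np.float32)  # (num_boxes, 4) as x1, y1, x2, y2
    x1, y1, x2, y2 = boxes.T
    return np.stack([x1 / img_w, y1 / img_h,
                     x2 / img_w, y2 / img_h,
                     (x2 - x1) / img_w, (y2 - y1) / img_h], axis=1)

feats = spatial_features([[0, 0, 320, 240]], img_w=640, img_h=480)
print(feats.shape)  # (1, 6)
```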
# Citations
Please consider citing the original project and the VinVL paper
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
title={Vinvl: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
```
| df5c02623cfee8046f8e4a664b43a90e |
lckidwell/album-cover-style | lckidwell | null | 39 | 37 | diffusers | 3 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 3,265 | false | ### Album-Cover-Style Dreambooth model
> trained by lckidwell with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Trained on ~80 album covers, mostly from the 50s and 60s, a mix of Jazz, pop, polka, religious, children's and other genres.
## Sample Prompts:
* Kanye plays jazz, albumcover style
* Swingin' with Henry Kissinger, albumcover style
* Jay Z Children's album, albumcover style
* Polka Party with Machine Gun Kelly, albumcover style
## Sample pictures of this concept:
![0](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02503-4178330406-Swingin'_with_Henry_Kissinger,_albumcover_style.png)
![2](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02512-2122051129-Polka_Party_with_Henry_Kissinger,_albumcover_style.png)
![3](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02493-407743854-Kanye_goes_country,_albumcover_style.png)
![4](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02387-1542142160-albumcover_style,_albumcover_style.png)
![5](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02521-1024797607-Polka_Party_with_Henry_Kissinger_and_Weird_Al,_albumcover_style.png)
![6](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02491-407743852-Kanye_goes_country,_albumcover_style.png)
![7](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02509-4178330412-Swingin'_with_Henry_Kissinger,_albumcover_style.png)
![8](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02529-3942483747-Jayz_Childrens_Album,_albumcover_style.png)
![9](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02507-4178330410-Swingin'_with_Henry_Kissinger,_albumcover_style.png)
![10](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02395-1542142168-albumcover_style,_albumcover_style.png)
![11](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02494-1810968449-Kanye_plays_Jazz,_albumcover_style.png)
![12](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02537-2335869042-Polka_Party_with_Machine_Gun_Kelly,_albumcover_style.png)
![13](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02412-1542142185-albumcover_style,_albumcover_style.png)
![14](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/02403-1542142176-albumcover_style,_albumcover_style.png)
## Moar Samples
![0](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/00095.png)
![1](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/00101.png)
![2](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/00104.png)
![3](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/00111.png)
![4](https://huggingface.co/lckidwell/album-cover-style/resolve/main/sample_images/00113.png)
| dad04cd04f18165140c2e950e5bfbca7 |
megantosh/flair-arabic-multi-ner | megantosh | null | 7 | 651 | flair | 0 | token-classification | true | false | false | apache-2.0 | ['ar', 'en'] | ['AQMAR', 'ANERcorp'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model'] | false | true | true | 5,324 | false | # Arabic NER Model using Flair Embeddings
Training ran for 94 epochs with a batch size of 32 and a linearly decaying learning rate (2e-05, starting from 0.225), using GloVe and Flair forward and backward embeddings.
## Original Datasets:
- [AQMAR](http://www.cs.cmu.edu/~ark/ArabicNER/)
- [ANERcorp](http://curtis.ml.cmu.edu/w/courses/index.php/ANERcorp)
## Results:
- F1-score (micro) 0.8666
- F1-score (macro) 0.8488
| | Named Entity Type | True Positives | False Positives | False Negatives | Precision | Recall | class-F1 |
|------|-|----|----|----|-----------|--------|----------|
| LOC | Location| 539 | 51 | 68 | 0.9136 | 0.8880 | 0.9006 |
| MISC | Miscellaneous|408 | 57 | 89 | 0.8774 | 0.8209 | 0.8482 |
| ORG | Organisation|167 | 43 | 64 | 0.7952 | 0.7229 | 0.7574 |
| PER | Person (no title)|501 | 65 | 60 | 0.8852 | 0.8930 | 0.8891 |
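The per-class scores above follow directly from the TP/FP/FN counts; a quick sketch reproducing, for example, the LOC and PER rows:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return round(precision, 4), round(recall, 4), round(f1, 4)

# LOC row from the table above
print(prf1(539, 51, 68))  # (0.9136, 0.888, 0.9006)
# PER row from the table above
print(prf1(501, 65, 60))  # (0.8852, 0.893, 0.8891)
```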
---
# Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger
import pyarabic.araby as araby
from icecream import ic
tagger = SequenceTagger.load("julien-c/flair-ner")
arTagger = SequenceTagger.load('megantosh/flair-arabic-multi-ner')
sentence = Sentence('George Washington went to Washington .')
arSentence = Sentence('عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .')
# predict NER tags
tagger.predict(sentence)
arTagger.predict(arSentence)
# print sentence with predicted tags
ic(sentence.to_tagged_string)
ic(arSentence.to_tagged_string)
```
# Example
```bash
2021-07-07 14:30:59,649 loading file /Users/mega/.flair/models/flair-ner/f22eb997f66ae2eacad974121069abaefca5fe85fce71b49e527420ff45b9283.941c7c30b38aef8d8a4eb5c1b6dd7fe8583ff723fef457382589ad6a4e859cfc
2021-07-07 14:31:04,654 loading file /Users/mega/.flair/models/flair-arabic-multi-ner/c7af7ddef4fdcc681fcbe1f37719348afd2862b12aa1cfd4f3b93bd2d77282c7.242d030cb106124f7f9f6a88fb9af8e390f581d42eeca013367a86d585ee6dd6
ic| sentence.to_tagged_string: <bound method Sentence.to_tagged_string of Sentence: "George Washington went to Washington ." [− Tokens: 6 − Token-Labels: "George <B-PER> Washington <E-PER> went to Washington <S-LOC> ."]>
ic| arSentence.to_tagged_string: <bound method Sentence.to_tagged_string of Sentence: "عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة ." [− Tokens: 11 − Token-Labels: "عمرو <B-PER> عادلي <I-PER> أستاذ للاقتصاد السياسي المساعد في الجامعة <B-ORG> الأمريكية <I-ORG> بالقاهرة <B-LOC> ."]>
ic| entity: <PER-span (1,2): "George Washington">
ic| entity: <LOC-span (5): "Washington">
ic| entity: <PER-span (1,2): "عمرو عادلي">
ic| entity: <ORG-span (8,9): "الجامعة الأمريكية">
ic| entity: <LOC-span (10): "بالقاهرة">
ic| sentence.to_dict(tag_type='ner'):
    {"text":"عمرو عادلي أستاذ للاقتصاد السياسي المساعد في الجامعة الأمريكية بالقاهرة .",
     "labels":[],
     "entities":[
       {"text":"عمرو عادلي",
        "start_pos":0,
        "end_pos":10,
        "labels":[PER (0.9826)]},
       {"text":"الجامعة الأمريكية",
        "start_pos":45,
        "end_pos":62,
        "labels":[ORG (0.7679)]},
       {"text":"بالقاهرة",
        "start_pos":64,
        "end_pos":72,
        "labels":[LOC (0.8079)]}]}
    {"text":"George Washington went to Washington .",
     "labels":[],
     "entities":[
       {"text":"George Washington",
        "start_pos":0,
        "end_pos":17,
        "labels":[PER (0.9968)]},
       {"text":"Washington",
        "start_pos":26,
        "end_pos":36,
        "labels":[LOC (0.9994)]}]}
```
# Model Configuration
```python
SequenceTagger(
(embeddings): StackedEmbeddings(
(list_embedding_0): WordEmbeddings('glove')
(list_embedding_1): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(7125, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=7125, bias=True)
)
)
(list_embedding_2): FlairEmbeddings(
(lm): LanguageModel(
(drop): Dropout(p=0.1, inplace=False)
(encoder): Embedding(7125, 100)
(rnn): LSTM(100, 2048)
(decoder): Linear(in_features=2048, out_features=7125, bias=True)
)
)
)
(word_dropout): WordDropout(p=0.05)
(locked_dropout): LockedDropout(p=0.5)
(embedding2nn): Linear(in_features=4196, out_features=4196, bias=True)
(rnn): LSTM(4196, 256, batch_first=True, bidirectional=True)
(linear): Linear(in_features=512, out_features=15, bias=True)
(beta): 1.0
(weights): None
  (weight_tensor): None
)
```
Due to the right-to-left script appearing in a left-to-right context, some formatting errors might occur, and your code might appear like [this](https://ibb.co/ky20Lnq) (link accessed on 2020-10-27).
# Citation
*if you use this model, please consider citing [this work](https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects):*
```latex
@unpublished{MMHU21,
author = "M. Megahed",
title = "Sequence Labeling Architectures in Diglossia",
year = {2021},
doi = "10.13140/RG.2.2.34961.10084",
url = {https://www.researchgate.net/publication/358956953_Sequence_Labeling_Architectures_in_Diglossia_-_a_case_study_of_Arabic_and_its_dialects}
}
``` | 8335cf7be992f88089988920d4925f96 |
fathyshalab/all-roberta-large-v1-work-1-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,509 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| e3349f97225f575ac0200cbafe4edf49 |
SeNSiTivE/Learning-sentiment-analysis-through-imdb-ds | SeNSiTivE | distilbert | 13 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,059 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Learning-sentiment-analysis-through-imdb-ds
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3419
- Accuracy: 0.8767
- F1: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 9964ad272bd39ebd2710fadab337a9a3 |
facebook/mask2former-swin-base-IN21k-cityscapes-semantic | facebook | mask2former | 5 | 17 | transformers | 0 | image-segmentation | true | false | false | other | null | ['coco'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['vision', 'image-segmentation'] | false | true | true | 2,932 | false |
# Mask2Former
Mask2Former model trained on Cityscapes semantic segmentation (base-IN21k, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)
## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-cityscapes-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | e4ec1712254c5a4ae9419072a056f3fd |
edgertej/poebert-eras-balanced | edgertej | bert | 7 | 19 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,417 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# edgertej/poebert-eras-balanced
This model is a fine-tuned version of [edgertej/poebert-eras-balanced](https://huggingface.co/edgertej/poebert-eras-balanced) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5715
- Validation Loss: 3.3710
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8002 | 3.5259 | 0 |
| 3.7486 | 3.4938 | 1 |
| 3.7053 | 3.4520 | 2 |
| 3.7315 | 3.4211 | 3 |
| 3.6226 | 3.4031 | 4 |
| 3.6021 | 3.3968 | 5 |
| 3.5715 | 3.3710 | 6 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
| b88c6e9ce578c9bf72af548ef462193e |
jonatasgrosman/exp_w2v2t_zh-cn_vp-es_s408 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['zh-CN'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'zh-CN'] | false | true | true | 475 | false | # exp_w2v2t_zh-cn_vp-es_s408
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
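If your audio is at a different sampling rate, resample it to 16 kHz before feeding it to the model. A minimal, dependency-light sketch using naive linear interpolation (for illustration only; in practice prefer `librosa.resample` or `torchaudio.transforms.Resample`):

```python
import numpy as np

def resample_to_16k(waveform, orig_sr, target_sr=16000):
    """Naive linear-interpolation resampler; illustrative, not production quality."""
    if orig_sr == target_sr:
        return waveform
    duration = len(waveform) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(waveform), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, waveform)

# 1 second of a 440 Hz tone sampled at 44.1 kHz -> 16 kHz
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
resampled = resample_to_16k(tone, sr)
print(resampled.shape)  # (16000,)
```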
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
| 8d84c6e613ebc2557304ca14990ed654 |
CAMeL-Lab/bert-base-arabic-camelbert-da-ner | CAMeL-Lab | bert | 9 | 6 | transformers | 0 | token-classification | true | true | false | apache-2.0 | ['ar'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,684 | false | # CAMeLBERT-DA NER Model
## Model description
**CAMeLBERT-DA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper, *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:
```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
>>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
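The tag sequence above uses the BIO scheme (`B-` begins an entity, `I-` continues it, `O` is outside). If you want entity spans rather than per-token tags, a small helper can group them; this is a generic sketch, not part of CAMeL Tools:

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags into (entity_type, tokens) spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype is None):
            if etype is not None:
                spans.append((etype, tokens[start:i]))
            etype, start = tag[2:], i
        elif tag == "O":
            if etype is not None:
                spans.append((etype, tokens[start:i]))
                etype, start = None, None
        elif tag.startswith("I-") and tag[2:] != etype:
            spans.append((etype, tokens[start:i]))
            etype, start = tag[2:], i
    if etype is not None:
        spans.append((etype, tokens[start:]))
    return spans

tokens = "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع".split()
tags = ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
print(bio_to_spans(tokens, tags))
# [('LOC', ['أبوظبي']), ('LOC', ['الإمارات', 'العربية', 'المتحدة'])]
```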
You can also use the NER model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-da-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
'score': 0.9895730018615723,
'entity': 'B-LOC',
'index': 2,
'start': 6,
'end': 12},
{'word': 'الإمارات',
'score': 0.8156259655952454,
'entity': 'B-LOC',
'index': 8,
'start': 33,
'end': 41},
{'word': 'العربية',
'score': 0.890906810760498,
'entity': 'I-LOC',
'index': 9,
'start': 42,
'end': 49},
{'word': 'المتحدة',
'score': 0.8169114589691162,
'entity': 'I-LOC',
'index': 10,
'start': 50,
'end': 57}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` | e32764503f8edbdf28c69da35d12f1e5 |
AmolSatsangi/t5-small-finetuned-xsum | AmolSatsangi | t5 | 14 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,252 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 125 | 2.8679 | 23.1742 | 9.8716 | 18.5896 | 20.7943 | 19.0 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 2f798e5a5c52a93228748713b8bbf346 |
tlttl/tluo_xml_roberta_base_amazon_review_sentiment_v2 | tlttl | xlm-roberta | 15 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,770 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tluo_xml_roberta_base_amazon_review_sentiment_v2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9630
- Accuracy: 0.6057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0561 | 0.33 | 5000 | 0.9954 | 0.567 |
| 0.948 | 0.67 | 10000 | 0.9641 | 0.5862 |
| 0.9557 | 1.0 | 15000 | 0.9605 | 0.589 |
| 0.8891 | 1.33 | 20000 | 0.9420 | 0.5875 |
| 0.8889 | 1.67 | 25000 | 0.9397 | 0.592 |
| 0.8777 | 2.0 | 30000 | 0.9236 | 0.6042 |
| 0.778 | 2.33 | 35000 | 0.9612 | 0.5972 |
| 0.7589 | 2.67 | 40000 | 0.9728 | 0.5995 |
| 0.7593 | 3.0 | 45000 | 0.9630 | 0.6057 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 282c1cb7690331e5711b569d33d850bd |
huynhdoo/camembert-base-finetuned-jva-missions-report | huynhdoo | camembert | 20 | 15 | transformers | 0 | text-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,656 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# huynhdoo/camembert-base-finetuned-jva-missions-report
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0542
- Train Accuracy: 0.9844
- Validation Loss: 0.5073
- Validation Accuracy: 0.8436
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1005, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
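The optimizer config above uses a `PolynomialDecay` learning-rate schedule with `power=1.0`, which reduces to plain linear decay from `5e-05` to `0.0` over 1005 steps. A pure-Python sketch of that schedule, for illustration:

```python
def polynomial_decay(step, initial_lr=5e-05, decay_steps=1005, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay; with power=1.0 this is linear decay."""
    step = min(step, decay_steps)  # the schedule is clamped after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # 5e-05
print(polynomial_decay(1005))  # 0.0
```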
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4753 | 0.7890 | 0.3616 | 0.8547 | 0 |
| 0.3120 | 0.8799 | 0.3702 | 0.8492 | 1 |
| 0.1824 | 0.9340 | 0.3928 | 0.8547 | 2 |
| 0.0972 | 0.9714 | 0.4849 | 0.8436 | 3 |
| 0.0542 | 0.9844 | 0.5073 | 0.8436 | 4 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
| 121533b00992cc583d74a4b7f2d93539 |
Laughify/sonic06-diffusion | Laughify | null | 22 | 13 | diffusers | 2 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 1,944 | false | ### Sonic06-Diffusion on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by Laughify
This is a fine-tuned Stable Diffusion model trained on screenshots from the Sonic the Hedgehog (2006) game. Use **saisikwrd** in your prompts for the effect.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
![A4FD8847-3EFF-4770-BDDA-A9404B5D436E.png 0](https://huggingface.co/Laughify/sonic06-diffusion/resolve/main/concept_images/A4FD8847-3EFF-4770-BDDA-A9404B5D436E.png)
![EDCBECAC-0116-4170-8A43-16F2F8FCC119.png 1](https://huggingface.co/Laughify/sonic06-diffusion/resolve/main/concept_images/EDCBECAC-0116-4170-8A43-16F2F8FCC119.png)
![815527F3-1658-4655-B0A9-6F0C73CED7D7.png 2](https://huggingface.co/Laughify/sonic06-diffusion/resolve/main/concept_images/815527F3-1658-4655-B0A9-6F0C73CED7D7.png)
| 68ed4b66e42c982ee7a74987073073d4 |
laituan245/molt5-small | laituan245 | t5 | 8 | 51 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 502 | false | ## Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-small", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small')
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
| c6256066a1977d8f90541d3cdd1fb269 |
d0r1h/LEDBill | d0r1h | led | 15 | 5 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['billsum'] | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | ['summarization'] | true | true | true | 2,004 | false |
# Longformer Encoder-Decoder (LED) fine-tuned on Billsum
This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [billsum](https://huggingface.co/datasets/billsum) dataset.
As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base) since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.
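The position-embedding copy described above can be sketched with numpy; the hidden size of 768 for *bart-base* is an assumption here, and real checkpoints use learned (not random) embeddings:

```python
import numpy as np

# bart-base has 1024 position embeddings; led-base-16384 repeats
# that matrix 16 times along the position axis to cover 16K tokens.
bart_pos_emb = np.random.rand(1024, 768).astype(np.float32)
led_pos_emb = np.tile(bart_pos_emb, (16, 1))
print(led_pos_emb.shape)  # (16384, 768)
```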
## How to use
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("d0r1h/LEDBill")
model = AutoModelForSeq2SeqLM.from_pretrained("d0r1h/LEDBill", return_dict_in_generate=True).to(device)
case = "......."
input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device)
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1
sequences = model.generate(input_ids,
global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences,
skip_special_tokens=True)
```
## Evaluation results
When the model is used for summarizing Billsum documents (10 samples), it achieves the following results:
| Model | rouge1-f | rouge1-p | rouge2-f | rouge2-p | rougeL-f | rougeL-p |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| LEDBill | **34** | **37** | **15** | **16** | **30** | **32** |
| led-base | 2 | 15 | 0 | 0 | 2 | 15 |
[This notebook](https://colab.research.google.com/drive/1iEEFbWeTGUSDesmxHIU2QDsPQM85Ka1K?usp=sharing) shows how *led* can effectively be used for downstream tasks such as summarization.
| 6d41e3126e0ec9e9c59f26a8a02f4eb1 |
keras-sd/diffusion-model-tflite | keras-sd | null | 3 | 0 | keras | 0 | text-to-image | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['diffusion model', 'stable diffusion', 'v1.4'] | false | true | true | 1,274 | false |
This repository hosts the TFLite version of `diffusion model` part of [KerasCV Stable Diffusion](https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion).
Stable Diffusion consists of a `text encoder`, `diffusion model`, `decoder`, and some glue code to handle the inputs and outputs of each part. The TFLite version of the `diffusion model` in this repository is built not only from the `diffusion model` itself but also from TensorFlow operations that take the `context` and `unconditional context` produced by the `text encoder` and generate a `latent`. The `latent` output should be passed down to the `decoder`, which is hosted in [this repository](https://huggingface.co/keras-sd/decoder-tflite/tree/main).
TFLite conversion was based on the `SavedModel` from [this repository](https://huggingface.co/keras-sd/tfs-text-encoder/tree/main), and TensorFlow version `>= 2.12-nightly` was used.
- NOTE: [Dynamic range quantization](https://www.tensorflow.org/lite/performance/post_training_quant#optimizing_an_existing_model) was used.
- NOTE: The conversion process will fail with TensorFlow versions `< 2.12-nightly`.
- NOTE: For those who wonder how `SavedModel` is constructed, find it in [keras-sd-serving repository](https://github.com/deep-diver/keras-sd-serving). | 71966148b79c1db2fa4b594e32a2b2c9 |
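The pieces described above can be wired together roughly as follows. This is a minimal sketch, not an official example: the signature input names and the latent shape are assumptions that should be checked against the actual converted model (e.g. via `interpreter.get_signature_list()`).

```python
def latent_shape(height=512, width=512, batch=1):
    # Stable Diffusion works in a latent space downsampled 8x from pixel
    # space, with 4 channels per latent "pixel".
    return (batch, height // 8, width // 8, 4)

def run_diffusion(tflite_path, context, unconditional_context):
    """Run the TFLite diffusion model on text-encoder outputs.

    The keyword names below are assumptions; inspect the model's
    signature to find the real input names.
    """
    import tensorflow as tf  # lazy import; requires TF >= 2.12-nightly at conversion time
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    runner = interpreter.get_signature_runner()  # default serving signature
    outputs = runner(context=context,
                     unconditional_context=unconditional_context)
    # The single output is the latent, to be fed to the decoder model.
    return next(iter(outputs.values()))
```

The returned `latent` would then be passed to the decoder TFLite model linked above.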
roscazo/gpt2-covid | roscazo | gpt2 | 8 | 4 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 965 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-covid
This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-base-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-base-bne) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
| 1d849eb79316916288730a8b8bc2cf31 |
jkhan447/sentiment-model-sample-ekman-emotion | jkhan447 | bert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,027 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-ekman-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4963
- Accuracy: 0.6713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| ef94c3b1e6434c8eddea7d67ecc53de3 |
sd-concepts-library/klance | sd-concepts-library | null | 11 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,188 | false | ### klance on Stable Diffusion
This is the `<klance>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
![<klance> 0](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/5.jpeg)
![<klance> 1](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/3.jpeg)
![<klance> 2](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/0.jpeg)
![<klance> 3](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/2.jpeg)
![<klance> 4](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/1.jpeg)
![<klance> 5](https://huggingface.co/sd-concepts-library/klance/resolve/main/concept_images/4.jpeg)
| 54321fb63b1ed1c3abdd81b4d593a001 |
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1 | SetFit | distilbert | 10 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,905 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 73bed2420e4faf33e189c921847ca4de |
projecte-aina/tts-ca-coqui-vits-multispeaker | projecte-aina | null | 12 | 1 | null | 0 | null | true | false | false | cc-by-4.0 | ['ca'] | ['mozilla-foundation/common_voice_8_0', 'openslr'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['TTS', 'audio', 'synthesis', 'VITS', 'speech', 'coqui.ai', 'pytorch'] | false | true | true | 5,602 | false |
# Aina Project's Catalan multi-speaker text-to-speech model
## Model description
This model was trained from scratch using the [Coqui TTS](https://github.com/coqui-ai/TTS) toolkit on a combination of three datasets: [Festcat](http://festcat.talp.cat/devel.php), the high-quality open speech dataset from [Google](http://openslr.org/69/) (also available as [OpenSLR 69](https://huggingface.co/datasets/openslr/viewer/SLR69/train)), and [Common Voice v8](https://commonvoice.mozilla.org/ca). For training, 101,460 utterances from 257 speakers were used, corresponding to nearly 138 hours of speech.
A live inference demo can be found in our spaces, [here](https://huggingface.co/spaces/projecte-aina/tts-ca-coqui-vits-multispeaker).
## Intended uses and limitations
You can use this model to generate synthetic speech in Catalan with different voices.
## How to use
### Usage
Required libraries:
```bash
pip install git+https://github.com/coqui-ai/TTS@dev#egg=TTS
```
Synthesize speech using Python:
```python
from TTS.utils.synthesizer import Synthesizer
model_path = # Absolute path to the model checkpoint.pth
config_path = # Absolute path to the model config.json
speakers_file_path = # Absolute path to speakers.pth file
text = "Text to synthesize"
speaker_idx = "Speaker ID"
synthesizer = Synthesizer(
model_path, config_path, speakers_file_path, None, None, None,
)
wavs = synthesizer.tts(text, speaker_idx)
```
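The `wavs` returned above are float audio samples. A stdlib-only sketch for writing them to disk follows; the 22050 Hz sample rate is an assumption typical of VITS configs and should be read from the model's `config.json` in practice. Recent versions of Coqui TTS also expose a `synthesizer.save_wav` helper.

```python
import struct
import wave

def save_wav(samples, path, sample_rate=22050):
    # Convert float samples in [-1, 1] to 16-bit mono PCM and write a WAV file.
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit
        f.setframerate(sample_rate)
        pcm = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(pcm)
```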
## Training
### Training Procedure
### Data preparation
The data has been processed using the script [process_data.sh](https://huggingface.co/projecte-aina/tts-ca-coqui-vits-multispeaker/blob/main/data_processing/process_data.sh), which reduces the sampling frequency of the audio, removes silences, adds padding, and structures the data in the format accepted by the framework. You can find more information [here](https://huggingface.co/projecte-aina/tts-ca-coqui-vits-multispeaker/blob/main/data_processing/README.md).
### Hyperparameters
The model is based on VITS, proposed by [Kim et al.](https://arxiv.org/abs/2106.06103) The following hyperparameters were set in the Coqui framework.
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Model | vits |
| Batch Size | 16 |
| Eval Batch Size | 8 |
| Mixed Precision | false |
| Window Length | 1024 |
| Hop Length | 256 |
| FFT size                           | 1024                             |
| Num Mels | 80 |
| Phonemizer | espeak |
| Phoneme Language                   | ca                               |
| Text Cleaners | multilingual_cleaners |
| Formatter | vctk_old |
| Optimizer | adam |
| Adam betas | (0.8, 0.99) |
| Adam eps | 1e-09 |
| Adam weight decay | 0.01 |
| Learning Rate Gen | 0.0001 |
| Lr. scheduler Gen                  | ExponentialLR                    |
| Lr. scheduler Gamma Gen            | 0.999875                         |
| Learning Rate Disc                 | 0.0001                           |
| Lr. scheduler Disc                 | ExponentialLR                    |
| Lr. scheduler Gamma Disc           | 0.999875                         |
The model was trained for 730962 steps.
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may contain bias and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
| f1af11dd54d6e3a85231eb6a04ee4b70 |
zates/distilbert-base-uncased-finetuned-squad-seed-420 | zates | distilbert | 11 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,244 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-420
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4491 | 1.0 | 8248 | 2.1014 |
| 2.1388 | 2.0 | 16496 | 1.9590 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 54a011136864a2aafdc89a64418b7174 |
Helsinki-NLP/opus-mt-en-zlw | Helsinki-NLP | marian | 11 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'pl', 'cs', 'zlw'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 3,056 | false |
### eng-zlw
* source group: English
* target group: West Slavic languages
* OPUS readme: [eng-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md)
* model: transformer
* source language(s): eng
* target language(s): ces csb_Latn dsb hsb pol
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.eval.txt)
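A short sketch of how the sentence-initial `>>id<<` token might be applied with the 🤗 Transformers Marian classes; the helper function and its validation set are illustrative additions, not part of the release:

```python
# Valid target-language IDs for this model, taken from the card above.
VALID_TARGETS = {"ces", "csb_Latn", "dsb", "hsb", "pol"}

def add_target_token(text, lang_id):
    # Marian multilingual models expect a sentence-initial >>id<< token.
    if lang_id not in VALID_TARGETS:
        raise ValueError(f"unsupported target language id: {lang_id}")
    return f">>{lang_id}<< {text}"

def translate(texts, lang_id, model_name="Helsinki-NLP/opus-mt-en-zlw"):
    from transformers import MarianMTModel, MarianTokenizer  # lazy import
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer([add_target_token(t, lang_id) for t in texts],
                      return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]
```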
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.6 | 0.488 |
| news-test2008-engces.eng.ces | 18.3 | 0.466 |
| newstest2009-engces.eng.ces | 19.8 | 0.483 |
| newstest2010-engces.eng.ces | 19.8 | 0.486 |
| newstest2011-engces.eng.ces | 20.6 | 0.489 |
| newstest2012-engces.eng.ces | 18.6 | 0.464 |
| newstest2013-engces.eng.ces | 22.3 | 0.495 |
| newstest2015-encs-engces.eng.ces | 21.7 | 0.502 |
| newstest2016-encs-engces.eng.ces | 24.5 | 0.521 |
| newstest2017-encs-engces.eng.ces | 20.1 | 0.480 |
| newstest2018-encs-engces.eng.ces | 19.9 | 0.483 |
| newstest2019-encs-engces.eng.ces | 21.2 | 0.490 |
| Tatoeba-test.eng-ces.eng.ces | 43.7 | 0.632 |
| Tatoeba-test.eng-csb.eng.csb | 1.2 | 0.188 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.5 | 0.167 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.199 |
| Tatoeba-test.eng.multi | 42.8 | 0.632 |
| Tatoeba-test.eng-pol.eng.pol | 43.2 | 0.641 |
### System Info:
- hf_name: eng-zlw
- source_languages: eng
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pl', 'cs', 'zlw']
- src_constituents: {'eng'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zlw
- short_pair: en-zlw
- chrF2_score: 0.632
- bleu: 42.8
- brevity_penalty: 0.973
- ref_len: 65397.0
- src_name: English
- tgt_name: West Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: eng-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 1b732c59f9870d4d7da996e2dfd8040c |
tomthefreak/Deneuve-Station | tomthefreak | null | 3 | 0 | null | 3 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,202 | false | Science Fiction space station textual embedding for Stable Diffusion 2.0.
This embedding is trained on 42 images from Marcel Deneuve's Artstation (https://www.artstation.com/marceldeneuve), then further tuned on an expanded dataset that includes 96 additional images generated with the initial embedding, alongside specific prompting tailored to improve quality.
Example generations:
![04405-461940410-Deneuve Station.png](https://s3.amazonaws.com/moonup/production/uploads/1670300627121-632799fd3476801d8f27a0b9.png)
_Prompt: "Deneuve Station" Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 461940410, Size: 768x768, Model hash: 2c02b20a_
![04412-2907310488-Deneuve Station.png](https://s3.amazonaws.com/moonup/production/uploads/1670300823006-632799fd3476801d8f27a0b9.png)
_Prompt: "Deneuve Station" Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2907310488, Size: 768x768, Model hash: 2c02b20a_
![04415-2937662716-Deneuve Station.png](https://s3.amazonaws.com/moonup/production/uploads/1670300993156-632799fd3476801d8f27a0b9.png)
_Prompt: "Deneuve Station" Steps: 10, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2937662716, Size: 768x768, Model hash: 2c02b20a_
| 701a520070e7062e6f998a115632cfa4 |
SiddharthaM/xlm-roberta-profane-final | SiddharthaM | xlm-roberta | 12 | 1 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,174 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-profane-final
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3272
- Accuracy: 0.9087
- Precision: 0.8411
- Recall: 0.8441
- F1: 0.8426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.2705 | 0.9030 | 0.8368 | 0.8192 | 0.8276 |
| 0.3171 | 2.0 | 592 | 0.2174 | 0.9192 | 0.8847 | 0.8204 | 0.8476 |
| 0.3171 | 3.0 | 888 | 0.2250 | 0.9202 | 0.8658 | 0.8531 | 0.8593 |
| 0.2162 | 4.0 | 1184 | 0.2329 | 0.9106 | 0.8422 | 0.8538 | 0.8478 |
| 0.2162 | 5.0 | 1480 | 0.2260 | 0.9183 | 0.8584 | 0.8584 | 0.8584 |
| 0.1766 | 6.0 | 1776 | 0.2638 | 0.9116 | 0.8409 | 0.8651 | 0.8522 |
| 0.146 | 7.0 | 2072 | 0.3088 | 0.9125 | 0.8494 | 0.8464 | 0.8478 |
| 0.146 | 8.0 | 2368 | 0.2873 | 0.9154 | 0.8568 | 0.8459 | 0.8512 |
| 0.1166 | 9.0 | 2664 | 0.3227 | 0.9144 | 0.8518 | 0.8518 | 0.8518 |
| 0.1166 | 10.0 | 2960 | 0.3272 | 0.9087 | 0.8411 | 0.8441 | 0.8426 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
| 8016a69e30e30d831753b3087d734878 |
jiseong/mt5-small-finetuned-news | jiseong | mt5 | 12 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,264 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jiseong/mt5-small-finetuned-news
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1208
- Validation Loss: 0.1012
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1829 | 0.1107 | 0 |
| 0.1421 | 0.1135 | 1 |
| 0.1208 | 0.1012 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| c200c783e8cc478e4aaecc8e163ea84a |
Helsinki-NLP/opus-mt-es-csn | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-es-csn
* source languages: es
* target languages: csn
* OPUS readme: [es-csn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csn | 87.8 | 0.901 |
| 46e07a59876aa6bb0401e83d7ff72b1c |
adache/distilbert-base-uncased-finetuned-emotion | adache | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,326 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9245
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8398 | 1.0 | 250 | 0.3276 | 0.9005 | 0.8966 |
| 0.2541 | 2.0 | 500 | 0.2270 | 0.9245 | 0.9249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
| 6639d3a0d3636f29dceee3f22a071000 |
moghis/xlm-roberta-base-finetuned-panx-it | moghis | xlm-roberta | 10 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,335 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- F1 Score: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7058 | 1.0 | 70 | 0.3183 | 0.7480 |
| 0.2808 | 2.0 | 140 | 0.2647 | 0.8070 |
| 0.1865 | 3.0 | 210 | 0.2380 | 0.8289 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 5b589b09fd91a6df28a321f96fbb6352 |
nasuka/distilbert-base-uncased-finetuned-emotion | nasuka | distilbert | 12 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,326 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2057
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8084 | 1.0 | 250 | 0.2883 | 0.9125 | 0.9110 |
| 0.2371 | 2.0 | 500 | 0.2057 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.13.2
| 838717df1976dca304340d45e4297951 |
gokuls/distilbert_add_GLUE_Experiment_qqp | gokuls | distilbert | 17 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,045 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_qqp
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4050
- Accuracy: 0.8320
- F1: 0.7639
- Combined Score: 0.7979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5406 | 1.0 | 1422 | 0.4844 | 0.7648 | 0.6276 | 0.6962 |
| 0.4161 | 2.0 | 2844 | 0.4451 | 0.8044 | 0.6939 | 0.7491 |
| 0.3079 | 3.0 | 4266 | 0.4050 | 0.8320 | 0.7639 | 0.7979 |
| 0.2338 | 4.0 | 5688 | 0.4633 | 0.8388 | 0.7715 | 0.8052 |
| 0.1801 | 5.0 | 7110 | 0.5597 | 0.8346 | 0.7489 | 0.7918 |
| 0.1433 | 6.0 | 8532 | 0.5641 | 0.8460 | 0.7774 | 0.8117 |
| 0.1155 | 7.0 | 9954 | 0.5940 | 0.8481 | 0.7889 | 0.8185 |
| 0.0963 | 8.0 | 11376 | 0.6896 | 0.8438 | 0.7670 | 0.8054 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
| d774fa185cda7c18f4f2ea508c99bfe5 |
wietsedv/xlm-roberta-base-ft-udpos28-eu | wietsedv | xlm-roberta | 8 | 15 | transformers | 0 | token-classification | true | false | false | apache-2.0 | ['eu'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['part-of-speech', 'token-classification'] | true | true | true | 566 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Basque
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-eu")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-eu")
```
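For quick tagging, the `pipeline` API can wrap the same checkpoint. The aggregation strategy and the small formatting helper below are illustrative choices, not prescribed by the authors:

```python
def format_tagged(pairs):
    # Render (word, tag) pairs as "word/TAG" strings for quick inspection.
    return " ".join(f"{word}/{tag}" for word, tag in pairs)

def tag(text, model_name="wietsedv/xlm-roberta-base-ft-udpos28-eu"):
    from transformers import pipeline  # lazy import
    tagger = pipeline("token-classification", model=model_name,
                      aggregation_strategy="simple")
    return [(t["word"], t["entity_group"]) for t in tagger(text)]
```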
| 3d421648cc70b2734472abfc89b0a1b8 |
RavenK/distilbert-base-uncased-finetuned-ner | RavenK | distilbert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9274
- Recall: 0.9370
- F1: 0.9322
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2431 | 1.0 | 878 | 0.0690 | 0.9174 | 0.9214 | 0.9194 | 0.9811 |
| 0.0525 | 2.0 | 1756 | 0.0606 | 0.9251 | 0.9348 | 0.9299 | 0.9830 |
| 0.0299 | 3.0 | 2634 | 0.0602 | 0.9274 | 0.9370 | 0.9322 | 0.9839 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
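This checkpoint emits token-level BIO tags (conll2003 uses B-/I- prefixes over PER, ORG, LOC, and MISC). A small helper to merge those tags into entity spans might look like the sketch below; note that Transformers' `pipeline(..., aggregation_strategy=...)` provides the same functionality built in.

```python
def merge_bio(tokens, tags):
    """Merge token-level BIO tags (e.g. B-PER, I-PER, O) into entity spans."""
    entities, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # Start of a new entity; flush any entity in progress.
            if current:
                entities.append(current)
            current = (tag[2:], [tok])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # Continuation of the current entity.
            current[1].append(tok)
        else:
            # O tag or an inconsistent I- tag: close the current entity.
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(toks)) for label, toks in entities]
```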
| d6b945d348bf6472ae3a578c3100b888 |
cwchengtw/wav2vec2-large-xls-r-300m-turkish-colab | cwchengtw | wav2vec2 | 13 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3873
- Wer: 0.3224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
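The `total_train_batch_size` above is the per-device batch size multiplied by the gradient accumulation steps. A schematic sketch of how accumulation yields that effective batch size (illustrative only, not the actual Trainer loop):

```python
# Effective batch size under gradient accumulation (values from the
# hyperparameters above): gradients are summed over `accum_steps` small
# batches before each optimizer step.
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the value above

def optimizer_steps(num_batches: int, accum_steps: int) -> int:
    """Count optimizer steps in a schematic accumulation loop."""
    steps = 0
    for i in range(1, num_batches + 1):
        # loss.backward() would accumulate gradients here
        if i % accum_steps == 0:
            # optimizer.step(); optimizer.zero_grad() would run here
            steps += 1
    return steps

print(optimizer_steps(100, 2))  # 100 small batches -> 50 optimizer steps
```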
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0846 | 3.67 | 400 | 0.7488 | 0.7702 |
| 0.4487 | 7.34 | 800 | 0.4428 | 0.5255 |
| 0.1926 | 11.01 | 1200 | 0.4218 | 0.4667 |
| 0.1302 | 14.68 | 1600 | 0.3957 | 0.4269 |
| 0.0989 | 18.35 | 2000 | 0.4321 | 0.4085 |
| 0.0748 | 22.02 | 2400 | 0.4067 | 0.3904 |
| 0.0615 | 25.69 | 2800 | 0.3914 | 0.3557 |
| 0.0485 | 29.36 | 3200 | 0.3873 | 0.3224 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| cdcf517079961a42e3e8962ba264bdda |
tkubotake/xlm-roberta-base-finetuned-panx-de-fr | tkubotake | xlm-roberta | 9 | 8 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,376 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on an unspecified dataset.

It achieves the following results on the evaluation set:
- Loss: 0.1829
- F1: 0.8671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.158 | 1.0 | 715 | 0.1689 | 0.8471 |
| 0.099 | 2.0 | 1430 | 0.1781 | 0.8576 |
| 0.0599 | 3.0 | 2145 | 0.1829 | 0.8671 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 5989de8ad8cf62cbdfc12e04474c3816 |
jbreunig/xlm-roberta-base-finetuned-panx-fr | jbreunig | xlm-roberta | 10 | 5 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- F1: 0.8422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5955 | 1.0 | 191 | 0.3344 | 0.7932 |
| 0.2556 | 2.0 | 382 | 0.2923 | 0.8252 |
| 0.1741 | 3.0 | 573 | 0.2661 | 0.8422 |
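The constant 191 steps per epoch in the table, combined with the batch size of 24 above, implies a training split of roughly 4,600 examples (a rough inference from the logged steps, not a documented figure):

```python
# Estimate the training-set size from the logged step counts above.
steps_per_epoch = 191    # from the table (573 steps / 3 epochs)
train_batch_size = 24    # from the hyperparameters
approx_examples = steps_per_epoch * train_batch_size
print(approx_examples)   # 4584: an upper bound, since the last batch may be partial
```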
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 1adbe000c3fd64d67fff10cd7520d69f |
Helsinki-NLP/opus-mt-tc-big-it-en | Helsinki-NLP | marian | 13 | 2,340 | transformers | 1 | translation | true | true | false | cc-by-4.0 | ['en', 'it'] | null | null | 5 | 3 | 1 | 1 | 0 | 0 | 0 | ['translation', 'opus-mt-tc'] | true | true | true | 5,407 | false |
# opus-mt-tc-big-it-en
Neural machine translation model for translating from Italian (it) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-02-25
* source language(s): ita
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information on released models: [OPUS-MT ita-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-eng/README.md)
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"So chi è il mio nemico.",
"Tom è illetterato; non capisce assolutamente nulla."
]
model_name = "pytorch-models/opus-mt-tc-big-it-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# I know who my enemy is.
# Tom is illiterate; he understands absolutely nothing.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-it-en")
print(pipe("So chi è il mio nemico."))
# expected output: I know who my enemy is.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ita-eng | tatoeba-test-v2021-08-07 | 0.82288 | 72.1 | 17320 | 119214 |
| ita-eng | flores101-devtest | 0.62115 | 32.8 | 1012 | 24721 |
| ita-eng | newssyscomb2009 | 0.59822 | 34.4 | 502 | 11818 |
| ita-eng | newstest2009 | 0.59646 | 34.3 | 2525 | 65399 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:40:08 EEST 2022
* port machine: LM0-400-22516.local
| 9a42a151a38509fb31bbbaab3501558b |
marifulhaque/wav2vec2-large-teacher-base-student-en-asr-timit | marifulhaque | wav2vec2 | 12 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,781 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-teacher-base-student-en-asr-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 73.5882
- Wer: 0.3422
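WER (word error rate) is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal illustrative implementation (not the exact metric script used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion / 6 words
```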
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 920.6083 | 3.17 | 200 | 1256.0675 | 1.0 |
| 660.5993 | 6.35 | 400 | 717.6098 | 0.9238 |
| 336.5288 | 9.52 | 600 | 202.0025 | 0.5306 |
| 131.3178 | 12.7 | 800 | 108.0701 | 0.4335 |
| 73.4232 | 15.87 | 1000 | 90.2797 | 0.3728 |
| 54.9439 | 19.05 | 1200 | 76.9043 | 0.3636 |
| 44.6595 | 22.22 | 1400 | 79.2443 | 0.3550 |
| 38.6381 | 25.4 | 1600 | 73.6277 | 0.3493 |
| 35.074 | 28.57 | 1800 | 73.5882 | 0.3422 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
| 53eb07ce3b2f808b2b85da44a2f65763 |
hassnain/wav2vec2-base-timit-demo-colab66 | hassnain | wav2vec2 | 12 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,669 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab66
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2675
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.3521 | 7.04 | 500 | 3.3666 | 1.0 |
| 3.1768 | 14.08 | 1000 | 3.3977 | 1.0 |
| 3.1576 | 21.13 | 1500 | 3.2332 | 1.0 |
| 3.1509 | 28.17 | 2000 | 3.2686 | 1.0 |
| 3.149 | 35.21 | 2500 | 3.2550 | 1.0 |
| 3.1478 | 42.25 | 3000 | 3.2689 | 1.0 |
| 3.1444 | 49.3 | 3500 | 3.2848 | 1.0 |
| 3.1442 | 56.34 | 4000 | 3.2675 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| 7a1b31dd9697b7881e8a930518e4dda7 |
ykleeee/wav2vec2-10epochs-3e3 | ykleeee | wav2vec2 | 9 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,901 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-10epochs-3e3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7024
- Wer: 0.6481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
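With a linear scheduler and 500 warmup steps, the learning rate ramps from 0 to 3e-3 over the first 500 optimizer steps, then decays linearly toward 0 at the final step. A small sketch of that schedule (illustrative; the total step count is an estimate from the table, where the last logged step is 2700 at epoch 9.71):

```python
def linear_warmup_lr(step: int, base_lr: float = 3e-3,
                     warmup_steps: int = 500, total_steps: int = 2780) -> float:
    # total_steps ~ 2780 is estimated from the training-results table above.
    if step < warmup_steps:
        return base_lr * step / warmup_steps                      # linear warmup
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(250))   # halfway through warmup
print(linear_warmup_lr(500))   # peak learning rate, 3e-3
print(linear_warmup_lr(2780))  # decayed to zero at the end of training
```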
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0938 | 0.36 | 100 | 0.4842 | 0.3227 |
| 0.67 | 0.72 | 200 | 0.7219 | 0.5669 |
| 0.7133 | 1.08 | 300 | 1.0698 | 0.7080 |
| 1.0312 | 1.44 | 400 | 1.2692 | 0.8953 |
| 1.2162 | 1.8 | 500 | 1.4763 | 1.0443 |
| 1.2401 | 2.16 | 600 | 1.4906 | 0.8694 |
| 1.2022 | 2.52 | 700 | 1.3686 | 0.9518 |
| 1.154 | 2.88 | 800 | 1.1618 | 0.9109 |
| 1.0467 | 3.24 | 900 | 1.2007 | 0.8602 |
| 1.1785 | 3.6 | 1000 | 1.2000 | 0.9160 |
| 0.979 | 3.96 | 1100 | 1.1464 | 0.8852 |
| 1.1421 | 4.32 | 1200 | 1.1117 | 0.9018 |
| 0.9622 | 4.68 | 1300 | 1.0976 | 0.8602 |
| 1.0939 | 5.04 | 1400 | 1.1126 | 0.8831 |
| 0.9414 | 5.4 | 1500 | 1.0134 | 0.8448 |
| 0.9433 | 5.76 | 1600 | 0.9320 | 0.7977 |
| 0.8389 | 6.12 | 1700 | 0.9013 | 0.7742 |
| 0.8838 | 6.47 | 1800 | 0.9088 | 0.7509 |
| 0.7907 | 6.83 | 1900 | 0.8581 | 0.7382 |
| 0.7704 | 7.19 | 2000 | 0.8300 | 0.7481 |
| 0.667 | 7.55 | 2100 | 0.8221 | 0.7349 |
| 0.6111 | 7.91 | 2200 | 0.7803 | 0.7102 |
| 0.5555 | 8.27 | 2300 | 0.8198 | 0.7314 |
| 0.4947 | 8.63 | 2400 | 0.8127 | 0.7036 |
| 0.4697 | 8.99 | 2500 | 0.7514 | 0.6805 |
| 0.402 | 9.35 | 2600 | 0.7348 | 0.6606 |
| 0.3682 | 9.71 | 2700 | 0.7024 | 0.6481 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 2.9.0
- Tokenizers 0.10.3
| eb4c74c0f006aa4eea93a36d8e741841 |
Stancld/long-t5-tglobal-large | Stancld | longt5 | 4 | 9 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 864 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-large
This model is a fine-tuned version of [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
| fb965732db69d9bbfb30d3321de1bde1 |
fathyshalab/massive_social-roberta-large-v1-3-7 | fathyshalab | roberta | 14 | 2 | sentence-transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | true | true | 1,460 | false |
# fathyshalab/massive_social-roberta-large-v1-3-7
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-3-7")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| e26d42a23ace19d90534b7d762551948 |
microsoft/MiniLM-L12-H384-uncased | microsoft | bert | 11 | 17,580 | transformers | 26 | text-classification | true | true | true | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-classification'] | false | true | true | 1,901 | false |
## MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation
MiniLM is a distilled model from the paper "[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://arxiv.org/abs/2002.10957)".
Please find information about preprocessing, training, and full details of MiniLM in the [original MiniLM repository](https://github.com/microsoft/unilm/blob/master/minilm/).
Please note: this checkpoint can serve as a drop-in substitute for BERT, but it needs to be fine-tuned before use!
### English Pre-trained Models
We release the **uncased** **12**-layer model with **384** hidden size distilled from an in-house pre-trained [UniLM v2](/unilm) model in BERT-Base size.
- MiniLMv1-L12-H384-uncased: 12-layer, 384-hidden, 12-heads, 33M parameters, 2.7x faster than BERT-Base
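The ~33M parameter figure is consistent with the stated architecture. A rough back-of-the-envelope count (illustrative; it assumes the bert-base-uncased vocabulary of 30,522 tokens and standard BERT-style layer shapes, and exact numbers depend on the implementation):

```python
# Rough parameter count for a 12-layer, 384-hidden, BERT-style encoder.
vocab, max_pos, types = 30522, 512, 2   # vocab size is an assumption (bert-base-uncased)
hidden, ffn, layers = 384, 4 * 384, 12

embeddings = (vocab + max_pos + types) * hidden + 2 * hidden   # + embedding LayerNorm
attention  = 4 * (hidden * hidden + hidden)                    # Q, K, V, output projections
ffn_block  = hidden * ffn + ffn + ffn * hidden + hidden        # two feed-forward projections
layernorms = 2 * 2 * hidden                                    # two LayerNorms per layer
per_layer  = attention + ffn_block + layernorms

total = embeddings + layers * per_layer
print(f"{total / 1e6:.1f}M")  # roughly 33M, matching the card
```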
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 2.0 and several GLUE benchmark tasks.
| Model | #Param | SQuAD 2.0 | MNLI-m | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |
|---------------------------------------------------|--------|-----------|--------|-------|------|------|------|------|------|
| [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) | 109M | 76.8 | 84.5 | 93.2 | 91.7 | 58.9 | 68.6 | 87.3 | 91.3 |
| **MiniLM-L12xH384** | 33M | 81.7 | 85.7 | 93.0 | 91.5 | 58.5 | 73.3 | 89.5 | 91.3 |
### Citation
If you find MiniLM useful in your research, please cite the following paper:
```bibtex
@misc{wang2020minilm,
title={MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers},
author={Wenhui Wang and Furu Wei and Li Dong and Hangbo Bao and Nan Yang and Ming Zhou},
year={2020},
eprint={2002.10957},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 6d48d5561663e4cb21a3715a7769d39d |