| Column | Dtype | Stats |
|---|---|---|
| repo_id | string | lengths 4 to 110 |
| author | string | lengths 2 to 27 |
| model_type | string | lengths 2 to 29 |
| files_per_repo | int64 | 2 to 15.4k |
| downloads_30d | int64 | 0 to 19.9M |
| library | string | lengths 2 to 37 |
| likes | int64 | 0 to 4.34k |
| pipeline | string | lengths 5 to 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2 to 30 |
| languages | string | lengths 4 to 1.63k |
| datasets | string | lengths 2 to 2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0 to 125 |
| prs_open | int64 | 0 to 120 |
| prs_merged | int64 | 0 to 15 |
| prs_closed | int64 | 0 to 28 |
| discussions_count | int64 | 0 to 218 |
| discussions_open | int64 | 0 to 148 |
| discussions_closed | int64 | 0 to 70 |
| tags | string | lengths 2 to 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 to 598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0 to 598k |
| hash | string | lengths 32 to 32 |
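The schema above describes one row per Hugging Face model repository. Below is a minimal sketch of loading and filtering such a dump with the `datasets` library, assuming the dump is published as a Hugging Face dataset; the repository id in the code is a placeholder, not the actual source of this export.

```python
from datasets import load_dataset

# Placeholder dataset id; substitute the repository that actually hosts this dump.
ds = load_dataset("user/model-card-dump", split="train")

# Keep only transformers models with a permissive license and at least one recent download.
subset = ds.filter(
    lambda row: row["library"] == "transformers"
    and row["license"] in {"apache-2.0", "mit"}
    and row["downloads_30d"] > 0
)
print(subset.column_names)
print(subset[0]["repo_id"], subset[0]["pipeline"])
```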
laurauzcategui/xlm-roberta-base-finetuned-marc-en
laurauzcategui
xlm-roberta
12
3
transformers
0
text-classification
true
false
false
mit
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,259
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8945 - Mae: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:---:| | 1.1411 | 1.0 | 235 | 0.9358 | 0.5 | | 0.9653 | 2.0 | 470 | 0.8945 | 0.5 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
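The card above documents the fine-tune but does not show how to run it. A minimal sketch using the `transformers` pipeline, assuming the checkpoint loads as a standard sequence-classification model; the example review and the printed label/score shape are illustrative, not taken from the card.

```python
from transformers import pipeline

# Amazon-reviews classifier fine-tuned from xlm-roberta-base.
classifier = pipeline(
    "text-classification",
    model="laurauzcategui/xlm-roberta-base-finetuned-marc-en",
)

print(classifier("This product broke after two days, very disappointed."))
# Expected output shape: [{'label': ..., 'score': ...}] (label names depend on the checkpoint config)
```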
0166f70a52720cf0bc7b4fa4a79adf40
Likalto4/Unconditional_Butterflies_x64
Likalto4
null
6
0
diffusers
0
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
true
true
677
false
# Model Card for a model trained based on Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class), not using accelerate yet. This model is a diffusion model for unconditional image generation of cute but small 🦋. The model was trained on 1000 images using the [DDPM](https://arxiv.org/abs/2006.11239) architecture. Generated images are 64x64 pixels. The model was trained for 50 epochs with a batch size of 64, using around 11 GB of GPU memory. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained("Likalto4/Unconditional_Butterflies_x64") image = pipeline().images[0] image ```
9bc2148cef8e56ad2e88bb5778a62104
google/t5-efficient-small-dm1000
google
t5
12
11
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,264
false
# T5-Efficient-SMALL-DM1000 (Deep-Narrow version) T5-Efficient-SMALL-DM1000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-small-dm1000** - is of model type **Small** with the following variations: - **dm** is **1000** It has **121.03** million parameters and thus requires *ca.* **484.11 MB** of memory in full precision (*fp32*) or **242.05 MB** of memory in half precision (*fp16* or *bf16*). 
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
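The card stresses that this is a pretrained-only checkpoint that must be fine-tuned before practical use, but it shows no loading code. A minimal sketch with `transformers`, assuming the standard T5 classes and tokenizer files apply to this Deep-Narrow variant; the span-corruption prompt is only an illustration.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-dm1000")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-dm1000")

# The checkpoint was pretrained with span-based MLM, so sentinel tokens mark masked spans.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```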
044a6823239e1c767eb11d2c823a473e
pierreguillou/bert-large-cased-squad-v1.1-portuguese
pierreguillou
bert
8
554
transformers
19
question-answering
true
true
false
mit
['pt']
['brWaC', 'squad', 'squad_v1_pt']
null
0
0
0
0
1
1
0
['question-answering', 'bert', 'bert-large', 'pytorch']
false
true
true
5,329
false
# Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 ![Exemple of what can do the Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1](https://miro.medium.com/max/5256/1*QxyeAjT2V1OfE2B6nEcs3w.png) ## Introduction The model was trained on the dataset SQUAD v1.1 in portuguese from the [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/). The language model used is the [BERTimbau Large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) (aka "bert-large-portuguese-cased") from [Neuralmind.ai](https://neuralmind.ai/): BERTimbau is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. ## Informations on the method used All the informations are in the blog post : [NLP | Como treinar um modelo de Question Answering em qualquer linguagem baseado no BERT large, melhorando o desempenho do modelo utilizando o BERT base? (estudo de caso em português)](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96) ## Notebook in GitHub [question_answering_BERT_large_cased_squad_v11_pt.ipynb](https://github.com/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb) ([nbviewer version](https://nbviewer.jupyter.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb)) ## Performance The results obtained are the following: ``` f1 = 84.43 (against 82.50 for the base model) exact match = 72.68 (against 70.49 for the base model) ``` ## How to use the model... with Pipeline ```python import transformers from transformers import pipeline # source: https://pt.wikipedia.org/wiki/Pandemia_de_COVID-19 context = r""" A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de COVID-19, uma doença respiratória causada pelo coronavírus da síndrome respiratória aguda grave 2 (SARS-CoV-2). O vírus tem origem zoonótica e o primeiro caso conhecido da doença remonta a dezembro de 2019 em Wuhan, na China. Em 20 de janeiro de 2020, a Organização Mundial da Saúde (OMS) classificou o surto como Emergência de Saúde Pública de Âmbito Internacional e, em 11 de março de 2020, como pandemia. Em 18 de junho de 2021, 177 349 274 casos foram confirmados em 192 países e territórios, com 3 840 181 mortes atribuídas à doença, tornando-se uma das pandemias mais mortais da história. Os sintomas de COVID-19 são altamente variáveis, variando de nenhum a doenças com risco de morte. O vírus se espalha principalmente pelo ar quando as pessoas estão perto umas das outras. Ele deixa uma pessoa infectada quando ela respira, tosse, espirra ou fala e entra em outra pessoa pela boca, nariz ou olhos. Ele também pode se espalhar através de superfícies contaminadas. As pessoas permanecem contagiosas por até duas semanas e podem espalhar o vírus mesmo se forem assintomáticas. """ model_name = 'pierreguillou/bert-large-cased-squad-v1.1-portuguese' nlp = pipeline("question-answering", model=model_name) question = "Quando começou a pandemia de Covid-19 no mundo?" 
result = nlp(question=question, context=context) print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}") # Answer: 'dezembro de 2019', score: 0.5087, start: 290, end: 306 ``` ## How to use the model... with the Auto classes ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese") model = AutoModelForQuestionAnswering.from_pretrained("pierreguillou/bert-large-cased-squad-v1.1-portuguese") ``` Or just clone the model repo: ```bash git lfs install git clone https://huggingface.co/pierreguillou/bert-large-cased-squad-v1.1-portuguese # if you want to clone without large files – just their pointers # prepend your git clone with the following env var: GIT_LFS_SKIP_SMUDGE=1 ``` ## Limitations and bias The training data used for this model comes from the Portuguese SQUAD dataset. It could contain a lot of unfiltered content, which is far from neutral, and carry biases. ## Author Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 was trained and evaluated by [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/) thanks to the Open Source code, platforms and advice of many organizations ([link to the list](https://medium.com/@pierre_guillou/nlp-como-treinar-um-modelo-de-question-answering-em-qualquer-linguagem-baseado-no-bert-large-1c899262dd96#c2f5)). In particular: [Hugging Face](https://huggingface.co/), [Neuralmind.ai](https://neuralmind.ai/), [Deep Learning Brasil group](http://www.deeplearningbrasil.com.br/) and [AI Lab](https://ailab.unb.br/). ## Citation If you use our work, please cite: ```bibtex @inproceedings{pierreguillou2021bertlargecasedsquadv11portuguese, title={Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1}, author={Pierre Guillou}, year={2021} } ```
637c1f818c60255d4eefc56884f6741e
sd-concepts-library/cindlop
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,000
false
### cindlop on Stable Diffusion This is the `<cindlop>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cindlop> 0](https://huggingface.co/sd-concepts-library/cindlop/resolve/main/concept_images/3.jpeg) ![<cindlop> 1](https://huggingface.co/sd-concepts-library/cindlop/resolve/main/concept_images/1.jpeg) ![<cindlop> 2](https://huggingface.co/sd-concepts-library/cindlop/resolve/main/concept_images/0.jpeg) ![<cindlop> 3](https://huggingface.co/sd-concepts-library/cindlop/resolve/main/concept_images/2.jpeg)
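The card explains that `<cindlop>` is a Textual Inversion concept and points to Colab notebooks, but gives no inline code. A minimal sketch of loading the embedding into a Stable Diffusion pipeline with `diffusers`, assuming a recent diffusers version that provides `load_textual_inversion`; the base checkpoint and prompt are illustrative choices, not prescribed by the card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD 1.x checkpoint compatible with the learned embedding; this one is a common choice.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the <cindlop> embedding straight from the concepts-library repo.
pipe.load_textual_inversion("sd-concepts-library/cindlop")

image = pipe("a plush toy in the style of <cindlop>").images[0]
image.save("cindlop.png")
```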
10138bbc4bcb6337a79d8f03fc89f6e9
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
498
false
# exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
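The card notes the 16 kHz sampling requirement but shows no inference code. A minimal sketch with the `transformers` ASR pipeline, which decodes and resamples audio files when `ffmpeg` is available; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-5_female-5_s474",
)

# Placeholder path; the file is resampled to 16 kHz before inference.
print(asr("sample.wav")["text"])
```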
4fe246e92785fcd94d6282995a7dfff4
Helsinki-NLP/opus-mt-tvl-fi
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-tvl-fi * source languages: tvl * target languages: fi * OPUS readme: [tvl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.fi | 22.0 | 0.439 |
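The card lists benchmark scores and download links but no usage snippet. A minimal sketch with the `transformers` MarianMT classes, assuming the standard Helsinki-NLP checkpoint layout; the Tuvaluan example sentence is only an illustration.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tvl-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Tuvaluan (tvl) source text into Finnish (fi).
batch = tokenizer(["Talofa, e a koe?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```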
17ac19314251396f6ff939da797a6ce2
sd-concepts-library/museum-by-coop-himmelblau
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,165
false
### museum by coop himmelblau on Stable Diffusion This is the `<coop himmelblau museum>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<coop himmelblau museum> 0](https://huggingface.co/sd-concepts-library/museum-by-coop-himmelblau/resolve/main/concept_images/3.jpeg) ![<coop himmelblau museum> 1](https://huggingface.co/sd-concepts-library/museum-by-coop-himmelblau/resolve/main/concept_images/1.jpeg) ![<coop himmelblau museum> 2](https://huggingface.co/sd-concepts-library/museum-by-coop-himmelblau/resolve/main/concept_images/0.jpeg) ![<coop himmelblau museum> 3](https://huggingface.co/sd-concepts-library/museum-by-coop-himmelblau/resolve/main/concept_images/2.jpeg)
21ef75a2bdee8170a92792a2d7c6c580
DrishtiSharma/whisper-large-v2-punjabi-700-steps
DrishtiSharma
whisper
15
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pa']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large Punjabi - Drishti Sharma This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2211 - Wer: 24.4764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 700 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0584 | 5.79 | 700 | 0.2211 | 24.4764 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
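The card reports WER but includes no inference code. A minimal sketch with the `transformers` pipeline, assuming the fine-tuned checkpoint is used like any other Whisper model; the audio path is a placeholder and the chunking option is optional.

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/whisper-large-v2-punjabi-700-steps",
    chunk_length_s=30,  # optional: enables long-form transcription by chunking
)

# Placeholder path to a Punjabi speech recording.
print(transcriber("punjabi_sample.wav")["text"])
```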
1ea1c1ab23c27c29bf8ca3e66e543f09
sramasamy8/testModel
sramasamy8
bert
5
4
transformers
0
fill-mask
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert']
false
true
true
8,918
false
# BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'}, {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'}, {'sequence': "[CLS] hello i'm a fine model. 
[SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-base-uncased') >>> unmasker("The man worked as a [MASK].") [{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'}, {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'}, {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'}, {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}] >>> unmasker("The woman worked as a [MASK].") [{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. 
- In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=bert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
eb567d15e58145fbf6573f91111e2f31
lmqg/mt5-base-ruquad-qg-ae
lmqg
mt5
20
90
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['ru']
['lmqg/qg_ruquad']
null
0
0
0
0
0
0
0
['question generation', 'answer extraction']
true
true
true
7,595
false
# Model Card of `lmqg/mt5-base-ruquad-qg-ae` This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction jointly on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base) - **Language:** ru - **Training data:** [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="ru", model="lmqg/mt5-base-ruquad-qg-ae") # model prediction question_answer_pairs = model.generate_qa("Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, в мае 1860 года провёл серию опытов.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-ruquad-qg-ae") # answer extraction answer = pipe("generate question: Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, <hl> в мае 1860 года <hl> провёл серию опытов.") # question generation question = pipe("extract answers: <hl> в английском языке в нарицательном смысле применяется термин rapid transit (скоростной городской транспорт), однако употребляется он только тогда, когда по смыслу невозможно ограничиться названием одной конкретной системы метрополитена. <hl> в остальных случаях используются индивидуальные названия: в лондоне — london underground, в нью-йорке — new york subway, в ливерпуле — merseyrail, в вашингтоне — washington metrorail, в сан-франциско — bart и т. п. в некоторых городах применяется название метро (англ. 
metro) для систем, по своему характеру близких к метро, или для всего городского транспорта (собственно метро и наземный пассажирский транспорт (в том числе автобусы и трамваи)) в совокупности.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-ruquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 87.9 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_1 | 36.66 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_2 | 29.53 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_3 | 24.23 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_4 | 20.06 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | METEOR | 30.18 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | MoverScore | 66.6 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | ROUGE_L | 35.35 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-ruquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_ruquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 80.21 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | QAAlignedF1Score (MoverScore) | 57.17 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | QAAlignedPrecision (BERTScore) | 76.48 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | QAAlignedPrecision (MoverScore) | 54.4 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | QAAlignedRecall (BERTScore) | 84.49 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | QAAlignedRecall (MoverScore) | 60.55 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-ruquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_ruquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 44.44 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | AnswerF1Score | 64.31 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | BERTScore | 86.22 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_1 | 45.61 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_2 | 40.76 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_3 | 36.22 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | Bleu_4 | 31.64 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | METEOR | 38.79 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | MoverScore | 74.64 | default | 
[lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | | ROUGE_L | 49.73 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_ruquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 32 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-ruquad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
158792df68f0c8ac54fc3e04d2fb8d52
ThomasNLG/CT0-11B
ThomasNLG
t5
13
17
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
603
false
**How do I pronounce the name of the model?** CT0 should be pronounced "C T Zero" (like in "Continual T5 for zero-shot") # Model Description CT0 is an extension of T0, a model showing great zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. ```bibtex @misc{scialom2022Continual, title={Fine-tuned Language Models are Continual Learners}, author={Thomas Scialom and Tuhin Chakrabarty and Smaranda Muresan}, year={2022}, eprint={2205.12393}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
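The card describes CT0 as a T0-style zero-shot prompted model but includes no usage code. A minimal sketch of prompting it with `transformers`, assuming the checkpoint follows the usual T0/T5 seq2seq interface; note that the 11B model needs substantial memory (here `device_map="auto"` with `accelerate` installed), and the prompt is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ThomasNLG/CT0-11B")
# device_map="auto" shards the 11B checkpoint across available devices (requires accelerate).
model = AutoModelForSeq2SeqLM.from_pretrained("ThomasNLG/CT0-11B", device_map="auto")

prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```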
2efe6312b6b27ad2ae3076a9bae2e239
aipicasso/cool-japan-diffusion-2-1-1-1
aipicasso
null
21
187
diffusers
1
text-to-image
false
false
false
other
null
null
null
1
0
1
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
6,961
false
# Cool Japan Diffusion 2.1.1.1 Model Card ![アイキャッチ](eyecatch.jpg) [注意事项。中国将对图像生成的人工智能实施法律限制。 ](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (中国国内にいる人への警告) English version is [here](README_en.md). # はじめに Cool Japan Diffusion はStable Diffsionをファインチューニングして、アニメやマンガ、ゲームなどのクールジャパンを表現することに特化したモデルです。なお、内閣府のクールジャパン戦略とは特に関係はありません。 # ライセンスについて ライセンスについては、もとのライセンス CreativeML Open RAIL++-M License に例外を除き商用利用禁止を追加しただけです。 例外を除き商用利用禁止を追加した理由は創作業界に悪影響を及ぼしかねないという懸念からです。 この懸念が払拭されれば、次のバージョンから元のライセンスに戻し、商用利用可能とします。 ちなみに、元のライセンスの日本語訳は[こちら](https://qiita.com/robitan/items/887d9f3153963114823d)になります。 営利企業にいる方は法務部にいる人と相談してください。 趣味で利用する方はあまり気にしなくても一般常識を守れば大丈夫なはずです。 なお、ライセンスにある通り、このモデルを改造しても、このライセンスを引き継ぐ必要があります。 # 法律や倫理について 本モデルは日本にて作成されました。したがって、日本の法律が適用されます。 本モデルの学習は、著作権法第30条の4に基づき、合法であると主張します。 また、本モデルの配布については、著作権法や刑法175条に照らしてみても、 正犯や幇助犯にも該当しないと主張します。詳しくは柿沼弁護士の[見解](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ)を御覧ください。 ただし、ライセンスにもある通り、本モデルの生成物は各種法令に従って取り扱って下さい。 しかし、本モデルを配布する行為が倫理的に良くないとは作者は思っています。 これは学習する著作物に対して著作者の許可を得ていないためです。 ただし、学習するには著作者の許可は法律上必要もなく、検索エンジンと同様法律上は問題はありません。 したがって、法的な側面ではなく、倫理的な側面を調査する目的も本配布は兼ねていると考えてください。 # 使い方 手軽に楽しみたい方は、こちらの[Space](https://huggingface.co/spaces/aipicasso/cool-japan-diffusion-latest-demo)をお使いください。 詳しい本モデルの取り扱い方は[こちらの取扱説明書](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146)にかかれています。 モデルは[ここ](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1-1/resolve/main/v2-1-1-1_fp16.ckpt)からダウンロードできます。 以下、一般的なモデルカードの日本語訳です。 ## モデル詳細 - **開発者:** Robin Rombach, Patrick Esser, Alfred Increment - **モデルタイプ:** 拡散モデルベースの text-to-image 生成モデル - **言語:** 日本語 - **ライセンス:** CreativeML Open RAIL++-M-NC License - **モデルの説明:** このモデルはプロンプトに応じて適切な画像を生成することができます。アルゴリズムは [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) と [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip) です。 - **補足:** - **参考文献:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## モデルの使用例 Stable Diffusion v2と同じ使い方です。 たくさんの方法がありますが、2つのパターンを提供します。 - Web UI - Diffusers ### Web UIの場合 こちらの[取扱説明書](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146)に従って作成してください。 ### Diffusersの場合 [🤗's Diffusers library](https://github.com/huggingface/diffusers) を使ってください。 まずは、以下のスクリプトを実行し、ライブラリをいれてください。 ```bash pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy ``` 次のスクリプトを実行し、画像を生成してください。 ```python from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler import torch model_id = "aipicasso/cool-japan-diffusion-2-1-1-1" scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "anime, masterpiece, a portrait of a girl, good pupil, 4k, detailed" negative_prompt="deformed, blurry, bad anatomy, bad pupil, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, bad hands, fused fingers, messy drawing, broken legs censor, low quality, mutated hands and fingers, long body, mutation, poorly drawn, bad eyes, ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with 
less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 3d, cg, text, japanese kanji" images = pipe(prompt,negative_prompt=negative_prompt, num_inference_steps=20).images images[0].save("girl.png") ``` **注意**: - [xformers](https://github.com/facebookresearch/xformers) を使うと早くなるらしいです。 - GPUを使う際にGPUのメモリが少ない人は `pipe.enable_attention_slicing()` を使ってください。 #### 想定される用途 - コンテスト - [AIアートグランプリ](https://www.aiartgrandprix.com/)への投稿 - ファインチューニングに用いた全データを開示し、審査基準を満たしていることを判断してもらうようにします。 - コンテストに向けて、要望があれば、Hugging Face の Community などで私に伝えてください。 - 画像生成AIに関する報道 - 公共放送だけでなく、営利企業でも可能 - 画像合成AIに関する情報を「知る権利」は創作業界に悪影響を及ぼさないと判断したためです。また、報道の自由などを尊重しました。 - クールジャパンの紹介 - 他国の人にクールジャパンとはなにかを説明すること。 - 他国の留学生はクールジャパンに惹かれて日本に来ることがおおくあります。そこで、クールジャパンが日本では「クールでない」とされていることにがっかりされることがとても多いとAlfred Incrementは感じております。他国の人が憧れる自国の文化をもっと誇りに思ってください。 - 研究開発 - Discord上でのモデルの利用 - プロンプトエンジニアリング - ファインチューニング(追加学習とも) - DreamBooth など - 他のモデルとのマージ - Latent Diffusion Modelとクールジャパンとの相性 - 本モデルの性能をFIDなどで調べること - 本モデルがStable Diffusion以外のモデルとは独立であることをチェックサムやハッシュ関数などで調べること - 教育 - 美大生や専門学校生の卒業制作 - 大学生の卒業論文や課題制作 - 先生が画像生成AIの現状を伝えること - 自己表現 - SNS上で自分の感情や思考を表現すること - Hugging Face の Community にかいてある用途 - 日本語か英語で質問してください #### 想定されない用途 - 物事を事実として表現するようなこと - 収益化されているYouTubeなどのコンテンツへの使用 - 商用のサービスとして直接提供すること - 先生を困らせるようなこと - その他、創作業界に悪影響を及ぼすこと # 使用してはいけない用途や悪意のある用途 - デジタル贋作 ([Digital Forgery](https://arxiv.org/abs/2212.03860)) は公開しないでください(著作権法に違反するおそれ) - 特に既存のキャラクターは公開しないでください(著作権法に違反するおそれ) - なお、学習していない[キャラクターも生成できる](https://twitter.com/ThePioneerJPnew/status/1609074173892235264?s=20&t=-rY1ufzNeIDT3Fm5YdME6g)そうです。(このツイート自体は研究目的として許可しています。) - 他人の作品を無断でImage-to-Imageしないでください(著作権法に違反するおそれ) - わいせつ物を頒布しないでください (刑法175条に違反するおそれ) - いわゆる業界のマナーを守らないようなこと - 事実に基づかないことを事実のように語らないようにしてください(威力業務妨害罪が適用されるおそれ) - フェイクニュース ## モデルの限界やバイアス ### モデルの限界 - よくわかっていない ### バイアス Stable Diffusionと同じバイアスが掛かっています。 気をつけてください。 ## 学習 **学習データ** 次のデータを主に使ってStable Diffusionをファインチューニングしています。 - VAEについて - Danbooruなどの無断転載サイトを除いた日本の国内法を遵守したデータ: 60万種類 (データ拡張により無限枚作成) - U-Netについて - Danbooruなどの無断転載サイトを除いた日本の国内法を遵守したデータ: 180万ペア **学習プロセス** Stable DiffusionのVAEとU-Netをファインチューニングしました。 - **ハードウェア:** RTX 3090, A6000 - **オプティマイザー:** AdamW - **Gradient Accumulations**: 1 - **バッチサイズ:** 1 ## 評価結果 ## 環境への影響 ほとんどありません。 - **ハードウェアタイプ:** RTX 3090, A6000 - **使用時間(単位は時間):** 700 - **クラウド事業者:** なし - **学習した場所:** 日本 - **カーボン排出量:** そんなにない ## 参考文献 @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *このモデルカードは [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md) に基づいて、Alfred Incrementがかきました。
aa64282b2f9da11bda568ad17c8f507d
espnet/Wangyou_Zhang_wsj0_2mix_enh_dc_crn_mapping_snr_raw
espnet
null
17
22
espnet
0
audio-to-audio
false
false
false
cc-by-4.0
null
['chime4']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'audio-to-audio']
false
true
true
5,674
false
## ESPnet2 ENH model ### `espnet/Wangyou_Zhang_wsj0_2mix_enh_dc_crn_mapping_snr_raw` This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/chime4/enh1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_wsj0_2mix_enh_dc_crn_mapping_snr_raw ``` ## ENH config <details><summary>expand</summary> ``` config: conf/tuning/train_enh_dc_crn_mapping_snr.yaml print_config: false log_level: INFO dry_run: false iterator_type: chunk output_dir: exp/enh_train_enh_dc_crn_mapping_snr_raw ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: 10 val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - si_snr - max - - valid - loss - min keep_nbest_models: 1 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 16 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/enh_stats_8k/train/speech_mix_shape - exp/enh_stats_8k/train/speech_ref1_shape - exp/enh_stats_8k/train/speech_ref2_shape valid_shape_file: - exp/enh_stats_8k/valid/speech_mix_shape - exp/enh_stats_8k/valid/speech_ref1_shape - exp/enh_stats_8k/valid/speech_ref2_shape batch_type: folded valid_batch_type: null fold_length: - 80000 - 80000 - 80000 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr_min_8k/wav.scp - speech_mix - sound - - dump/raw/tr_min_8k/spk1.scp - speech_ref1 - sound - - dump/raw/tr_min_8k/spk2.scp - speech_ref2 - sound valid_data_path_and_name_and_type: - - dump/raw/cv_min_8k/wav.scp - speech_mix - sound - - dump/raw/cv_min_8k/spk1.scp - speech_ref1 - sound - - dump/raw/cv_min_8k/spk2.scp - speech_ref2 - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-08 weight_decay: 1.0e-07 amsgrad: true scheduler: steplr scheduler_conf: step_size: 2 gamma: 0.98 init: xavier_uniform model_conf: stft_consistency: false loss_type: mask_mse mask_type: null criterions: - name: si_snr conf: eps: 1.0e-07 wrapper: pit wrapper_conf: weight: 1.0 use_preprocessor: false encoder: stft encoder_conf: n_fft: 256 hop_length: 128 separator: dc_crn separator_conf: num_spk: 2 input_channels: - 2 - 16 - 32 - 64 - 128 - 256 enc_hid_channels: 8 enc_layers: 5 glstm_groups: 2 glstm_layers: 2 glstm_bidirectional: true glstm_rearrange: false mode: mapping decoder: stft decoder_conf: n_fft: 256 hop_length: 128 required: - output_dir version: 0.10.7a1 distributed: false ``` </details> 
### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)}, pages={785--792}, year={2021}, } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{li2021espnetse, title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration}, author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji}, year={2020}, eprint={2011.03706}, archivePrefix={arXiv}, primaryClass={eess.AS} } ```
3e2d443ba1de0c5e58de8c50c0f67565
0ys/mt5-small-finetuned-amazon-en-es
0ys
mt5
13
5
transformers
0
summarization
true
false
false
apache-2.0
null
null
null
1
1
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
1,996
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0294 - Rouge1: 16.6807 - Rouge2: 8.0004 - Rougel: 16.2251 - Rougelsum: 16.1743 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.5928 | 1.0 | 1209 | 3.3005 | 14.7863 | 6.5038 | 14.3031 | 14.2522 | | 3.9024 | 2.0 | 2418 | 3.1399 | 16.9257 | 8.6583 | 16.15 | 16.1299 | | 3.5806 | 3.0 | 3627 | 3.0869 | 18.2734 | 9.1667 | 17.7441 | 17.5782 | | 3.4201 | 4.0 | 4836 | 3.0590 | 17.763 | 8.9447 | 17.1833 | 17.1661 | | 3.3202 | 5.0 | 6045 | 3.0598 | 17.7754 | 8.5695 | 17.4139 | 17.2653 | | 3.2436 | 6.0 | 7254 | 3.0409 | 16.8423 | 8.1593 | 16.5392 | 16.4297 | | 3.2079 | 7.0 | 8463 | 3.0332 | 16.8991 | 8.1574 | 16.4229 | 16.3515 | | 3.1801 | 8.0 | 9672 | 3.0294 | 16.6807 | 8.0004 | 16.2251 | 16.1743 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
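The card lists ROUGE scores but no inference snippet. A minimal sketch with the `transformers` summarization pipeline, assuming the checkpoint is used for the review-summarization task it was fine-tuned for; the review text is illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="0ys/mt5-small-finetuned-amazon-en-es")

review = (
    "Nothing special in this product. It arrived late, the packaging was damaged, "
    "and the build quality feels cheap compared to similar items I have bought before."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```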
ea94e152ab296dd8a284a533c141aba7
r3dhummingbird/DialoGPT-small-neku
r3dhummingbird
gpt2
9
6
transformers
0
conversational
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['conversational']
false
true
true
1,610
false
# DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-small-neku") model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-small-neku") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 200 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print the last output tokens from the bot print("NekuBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
0bae7f2c9698ed93c4ee26d5cf75c4a4
Joeythemonster/anything-midjourney-v-4-1
Joeythemonster
null
18
3,610
diffusers
32
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
1
1
0
3
3
0
['text-to-image', 'stable-diffusion']
false
true
true
635
false
### ANYTHING-MIDJOURNEY-V-4.1 Dreambooth model trained by Joeythemonster with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb), or run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Sample pictures of this concept:
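The card points to Colab notebooks for inference but includes no inline code. A minimal sketch of running the DreamBooth concept with `diffusers`, assuming the repository contains a standard Stable Diffusion pipeline layout; the prompt and output filename are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Joeythemonster/anything-midjourney-v-4-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a fantasy castle on a cliff, anything-midjourney style, highly detailed"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("concept_sample.png")
```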
4562e375bdbf89fc804f6dae4e7379fa
microsoft/beit-large-patch16-224-pt22k-ft22k
microsoft
beit
6
54,457
transformers
2
image-classification
true
false
true
apache-2.0
null
['imagenet', 'imagenet-21k']
null
0
0
0
0
0
0
0
['image-classification', 'vision']
false
true
true
5,395
false
# BEiT (large-sized model, fine-tuned on ImageNet-22k) BEiT model pre-trained in a self-supervised fashion on ImageNet-22k - also called ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on the same dataset at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224-pt22k-ft22k') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-224-pt22k-ft22k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 21,841 ImageNet-22k classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on the same dataset. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution. Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
bc69c81485e2e42195034acc2505ba4c
imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
imdanboy
null
36
38
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['en']
['ljspeech']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
11,604
false
## ESPnet2 TTS model ### `imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave` This model was trained by imdanboy using ljspeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout c173c30930631731e6836c274a591ad571749741 pip install -e . cd egs2/ljspeech/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/ljspeech_tts_train_jets_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_jets.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_jets_raw_phn_tacotron_g2p_en_no_space ngpu: 1 seed: 777 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 39471 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false collect_stats: false write_collected_feats: false max_epoch: 1000 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - text2mel_loss - min - - train - text2mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: -1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 50 use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 1000 batch_size: 20 valid_batch_size: null batch_bins: 3000000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/text_shape.phn - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/text_shape.phn - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr_no_dev/text - text - text - - dump/raw/tr_no_dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/energy.scp - energy - npy valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/energy.scp - energy - npy allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adamw optim_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler2: exponentiallr scheduler2_conf: gamma: 0.999875 generator_first: true token_list: - <blank> - <unk> - AH0 - N 
- T - D - S - R - L - DH - K - Z - IH1 - IH0 - M - EH1 - W - P - AE1 - AH1 - V - ER0 - F - ',' - AA1 - B - HH - IY1 - UW1 - IY0 - AO1 - EY1 - AY1 - . - OW1 - SH - NG - G - ER1 - CH - JH - Y - AW1 - TH - UH1 - EH2 - OW0 - EY2 - AO0 - IH2 - AE2 - AY2 - AA2 - UW0 - EH0 - OY1 - EY0 - AO2 - ZH - OW2 - AE0 - UW2 - AH2 - AY0 - IY2 - AW2 - AA0 - '''' - ER2 - UH2 - '?' - OY2 - '!' - AW0 - UH0 - OY0 - .. - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: tacotron g2p: g2p_en_no_space feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/feats_stats.npz tts: jets tts_conf: generator_type: jets_generator generator_params: adim: 256 aheads: 2 elayers: 4 eunits: 1024 dlayers: 4 dunits: 1024 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 use_masking: true encoder_normalize_before: true decoder_normalize_before: true encoder_type: transformer decoder_type: transformer conformer_rel_pos_type: latest conformer_pos_enc_layer_type: rel_pos conformer_self_attn_layer_type: rel_selfattn conformer_activation_type: swish use_macaron_style_in_conformer: true use_cnn_in_conformer: true conformer_enc_kernel_size: 7 conformer_dec_kernel_size: 31 init_type: xavier_uniform transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false generator_out_channels: 1 generator_channels: 512 generator_global_channels: -1 generator_kernel_size: 7 generator_upsample_scales: - 8 - 8 - 2 - 2 generator_upsample_kernel_sizes: - 16 - 16 - 4 - 4 generator_resblock_kernel_sizes: - 3 - 7 - 11 generator_resblock_dilations: - - 1 - 3 - 5 - - 1 - 3 - 5 - - 1 - 3 - 5 generator_use_additional_convs: true generator_bias: true generator_nonlinear_activation: LeakyReLU generator_nonlinear_activation_params: negative_slope: 0.1 generator_use_weight_norm: true segment_size: 64 idim: 78 odim: 80 discriminator_type: hifigan_multi_scale_multi_period_discriminator discriminator_params: scales: 1 scale_downsample_pooling: AvgPool1d scale_downsample_pooling_params: kernel_size: 4 stride: 2 padding: 2 scale_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 15 - 41 - 5 - 3 channels: 128 max_downsample_channels: 1024 max_groups: 16 bias: true downsample_scales: - 2 - 2 - 4 - 4 - 1 nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false follow_official_norm: false periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: 
negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true mel_loss_params: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null window: hann n_mels: 80 fmin: 0 fmax: null log_base: null lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 lambda_var: 1.0 lambda_align: 2.0 sampling_rate: 22050 cache_generator_outputs: true pitch_extract: dio pitch_extract_conf: reduction_factor: 1 use_token_averaged_f0: false fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/pitch_stats.npz energy_extract: energy energy_extract_conf: reduction_factor: 1 use_token_averaged_energy: false fs: 22050 n_fft: 1024 hop_length: 256 win_length: null energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/energy_stats.npz required: - output_dir - token_list version: '202204' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
e6ae7b127094a2dcfa6b27ee8f1e2d60
Subitha/roberta-squad
Subitha
roberta
17
11
transformers
0
question-answering
true
false
false
cc-by-4.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
930
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-squad This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
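### Example usage (sketch) The card leaves usage unspecified; below is a minimal question-answering sketch, assuming the checkpoint loads directly from the Hub under the id `Subitha/roberta-squad` (the question/context strings are placeholders).

```python
# Minimal sketch, not an official example; the repo id is taken from this card,
# and the question/context strings are placeholders.
from transformers import pipeline

qa = pipeline("question-answering", model="Subitha/roberta-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of deepset/tinyroberta-squad2 on the SQuAD dataset.",
)
print(result["answer"], result["score"])  # extracted span and its confidence
```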
9a268fccd29af776d9b8f14c3d507dfb
surajp/RoBERTa-hindi-guj-san
surajp
roberta
9
15
transformers
1
fill-mask
true
false
true
mit
['hi', 'sa', 'gu']
['Wikipedia (Hindi, Sanskrit, Gujarati)']
null
0
0
0
0
0
0
0
['Indic']
false
true
true
3,278
false
# RoBERTa-hindi-guj-san ## Model description Multillingual RoBERTa like model trained on Wikipedia articles of Hindi, Sanskrit, Gujarati languages. The tokenizer was trained on combined text. However, Hindi text was used to pre-train the model and then it was fine-tuned on Sanskrit and Gujarati Text combined hoping that pre-training with Hindi will help the model learn similar languages. ### Configuration | Parameter | Value | |---|---| | `hidden_size` | 768 | | `num_attention_heads` | 12 | | `num_hidden_layers` | 6 | | `vocab_size` | 30522 | |`model_type`|`roberta`| ## Intended uses & limitations #### How to use ```python # Example usage from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("surajp/RoBERTa-hindi-guj-san") model = AutoModelWithLMHead.from_pretrained("surajp/RoBERTa-hindi-guj-san") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) # Sanskrit: इयं भाषा न केवलं भारतस्य अपि तु विश्वस्य प्राचीनतमा भाषा इति मन्यते। # Hindi: अगर आप अब अभ्यास नहीं करते हो तो आप अपने परीक्षा में मूर्खतापूर्ण गलतियाँ करोगे। # Gujarati: ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો. fill_mask("ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો <mask> હતો.") ''' Output: -------- [ {'score': 0.07849744707345963, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો જ હતો.</s>', 'token': 390}, {'score': 0.06273336708545685, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો ન હતો.</s>', 'token': 478}, {'score': 0.05160355195403099, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો થઇ હતો.</s>', 'token': 2075}, {'score': 0.04751499369740486, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો એક હતો.</s>', 'token': 600}, {'score': 0.03788900747895241, 'sequence': '<s> ગુજરાતમાં ૧૯મી માર્ચ સુધી કોઈ સકારાત્મક (પોઝીટીવ) રીપોર્ટ આવ્યો પણ હતો.</s>', 'token': 840} ] ``` ## Training data Cleaned wikipedia articles in Hindi, Sanskrit and Gujarati on Kaggle. It contains training as well as evaluation text. Used in [iNLTK](https://github.com/goru001/inltk) - [Hindi](https://www.kaggle.com/disisbig/hindi-wikipedia-articles-172k) - [Gujarati](https://www.kaggle.com/disisbig/gujarati-wikipedia-articles) - [Sanskrit](https://www.kaggle.com/disisbig/sanskrit-wikipedia-articles) ## Training procedure - On TPU (using `xla_spawn.py`) - For language modelling - Iteratively increasing `--block_size` from 128 to 256 over epochs - Tokenizer trained on combined text - Pre-training with Hindi and fine-tuning on Sanskrit and Gujarati texts ``` --model_type distillroberta-base \ --model_name_or_path "/content/SanHiGujBERTa" \ --mlm_probability 0.20 \ --line_by_line \ --save_total_limit 2 \ --per_device_train_batch_size 128 \ --per_device_eval_batch_size 128 \ --num_train_epochs 5 \ --block_size 256 \ --seed 108 \ --overwrite_output_dir \ ``` ## Eval results perplexity = 2.920005983224673 > Created by [Suraj Parmar/@parmarsuraj99](https://twitter.com/parmarsuraj99) | [LinkedIn](https://www.linkedin.com/in/parmarsuraj99/) > Made with <span style="color: #e25555;">&hearts;</span> in India
2979f2144d13a3699111a1a82e952165
ziyu600601/etreyrt
ziyu600601
null
16
8
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image', 'image-to-image', 'diffusers']
false
true
true
4,567
false
# Diffusion model This model is trained with high quality and detailed anime images. ## Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI run EimisAnimeDiffusion_1.0v: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/EimisAnimeDiffusion_1.0v) # Sample generations This model works well on anime and landscape generations.<br> Anime:<br> There are some sample generations:<br> ``` Positive:a girl, Phoenix girl, fluffy hair, war, a hell on earth, Beautiful and detailed explosion, Cold machine, Fire in eyes, burning, Metal texture, Exquisite cloth, Metal carving, volume, best quality, normal hands, Metal details, Metal scratch, Metal defects, masterpiece, best quality, best quality, illustration, highres, masterpiece, contour deepening, illustration,(beautiful detailed girl),beautiful detailed glow Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 4186044705/4186044707, Size: 704x896 ``` <img src=https://imgur.com/2U295w3.png width=75% height=75%> <img src=https://imgur.com/2jtF376.png width=75% height=75%> ``` Positive:(1girl), cute, walking in the park, (night), full moon, north star, blue shirt, red skirt, detailed shirt, jewelry, autumn, dark blue hair, shirt hair, (magic:1.5), beautiful blue eyes Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 35, Sampler: Euler a, CFG scale: 9, Seed: 296195494, Size: 768x960 ``` <img src=https://imgur.com/gudKxQe.png width=75% height=75%> ``` Positive:night , ((1 girl)), alone, masterpiece, 8k wallpaper, highres, absurdres, high quality background, short hair, black hair, multicolor hair, beautiful frozen village, (full bright moon), blue dress, detailed dress, jewelry dress, (magic:1.2), blue fire, blue eyes, glowing eyes, fire, ice goddess, (blue detailed beautiful crown), electricity, blue electricity, blue light particles Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 2118767319, Size: 768x832 ``` <img src=https://imgur.com/lJL4CJL.png width=75% height=75%> Want to generate some amazing backgrounds? 
No problem: ``` Positive: above clouds, mountains, (night), full moon, castle, huge forest, forest between mountains, beautiful, masterpiece Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls)) Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 9, Seed: 83644543, Size: 896x640 ``` <img src=https://imgur.com/XfxAx0S.png width=75% height=75%> ## Disclaimer Some prompts might not work perfectly (mainly colors), so add some more prompts for it to work, or try these -->(). Usually they help. Also works well with img2img if you want to add detail. ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
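## Example usage (sketch) A minimal `diffusers` sketch, assuming the weights are stored in diffusers format under the repo id `ziyu600601/etreyrt` and that a CUDA GPU is available; the prompts are shortened versions of the samples above.

```python
# Minimal sketch, not an official example; the repo id and diffusers-format weights
# are assumptions based on this card's metadata.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ziyu600601/etreyrt", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "(1girl), cute, walking in the park, (night), full moon, blue shirt, red skirt, beautiful blue eyes"
negative = "lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), worst quality, low quality"

# 20 denoising steps and CFG scale 9, matching the sample settings listed above
image = pipe(prompt, negative_prompt=negative, num_inference_steps=20, guidance_scale=9).images[0]
image.save("sample.png")
```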
30d37d5fd99c088e055ba2a16ca5c660
adasgaleus/insertion-prop-015-correct-data
adasgaleus
distilbert
12
16
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,546
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # insertion-prop-015-correct-data This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0497 - Precision: 0.8907 - Recall: 0.8518 - F1: 0.8708 - Accuracy: 0.9816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0978 | 0.32 | 500 | 0.0581 | 0.8730 | 0.8300 | 0.8509 | 0.9787 | | 0.0633 | 0.64 | 1000 | 0.0515 | 0.8867 | 0.8447 | 0.8652 | 0.9807 | | 0.0588 | 0.96 | 1500 | 0.0497 | 0.8907 | 0.8518 | 0.8708 | 0.9816 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
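### Example usage (sketch) Since the card leaves usage unspecified, here is a minimal token-classification sketch, assuming the checkpoint and tokenizer load from the Hub under this repo id; the meaning of the predicted labels depends on the (undocumented) fine-tuning dataset.

```python
# Minimal sketch, not an official example; the input sentence is a placeholder
# and label meanings depend on the unknown training data.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="adasgaleus/insertion-prop-015-correct-data",
    aggregation_strategy="simple",
)
print(tagger("The quick brown fox jumps over the lazy dog."))
```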
5c573e0023f7ae80b423fc426bbbb6cd
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10
anas-awadalla
bert
16
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,000
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
a7c679ae8d72eb68ab05dd18c19d1ace
amanm27/bert-base-uncased-wiki-sports
amanm27
bert
9
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,277
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-wiki-sports This model is a fine-tuned version of [amanm27/bert-base-uncased-wiki](https://huggingface.co/amanm27/bert-base-uncased-wiki) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3589 | 1.0 | 912 | 2.0686 | | 2.176 | 2.0 | 1824 | 2.0025 | | 2.1022 | 3.0 | 2736 | 1.9774 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0 - Datasets 1.18.3 - Tokenizers 0.11.0
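### Example usage (sketch) A minimal fill-mask sketch, assuming the checkpoint loads from the Hub under this repo id; the input sentence is a placeholder.

```python
# Minimal sketch, not an official example; BERT-style checkpoints use the [MASK] token.
from transformers import pipeline

fill = pipeline("fill-mask", model="amanm27/bert-base-uncased-wiki-sports")
for pred in fill("The quarterback threw the [MASK] down the field."):
    print(pred["token_str"], round(pred["score"], 4))
```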
01730e4519c50239129ec713052cd4ed
sammy786/wav2vec2-xlsr-tatar
sammy786
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['tt']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'tt']
true
true
true
5,208
false
# sammy786/wav2vec2-xlsr-tatar This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - tt dataset. It achieves the following results on evaluation set (which is 10 percent of train data set merged with other and dev datasets): - Loss: 7.66 - Wer: 7.08 ## Model description "facebook/wav2vec2-xls-r-1b" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Finnish train.tsv, dev.tsv and other.tsv ## Training procedure For creating the train dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000045637994662983496 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 200 | 4.849400 | 1.874908 | 0.995232 | | 400 | 1.105700 | 0.257292 | 0.367658 | | 600 | 0.723000 | 0.181150 | 0.250513 | | 800 | 0.660600 | 0.167009 | 0.226078 | | 1000 | 0.568000 | 0.135090 | 0.177339 | | 1200 | 0.721200 | 0.117469 | 0.166413 | | 1400 | 0.416300 | 0.115142 | 0.153765 | | 1600 | 0.346000 | 0.105782 | 0.153963 | | 1800 | 0.279700 | 0.102452 | 0.146149 | | 2000 | 0.273800 | 0.095818 | 0.128468 | | 2200 | 0.252900 | 0.102302 | 0.133766 | | 2400 | 0.255100 | 0.096592 | 0.121316 | | 2600 | 0.229600 | 0.091263 | 0.124561 | | 2800 | 0.213900 | 0.097748 | 0.125687 | | 3000 | 0.210700 | 0.091244 | 0.125422 | | 3200 | 0.202600 | 0.084076 | 0.106284 | | 3400 | 0.200900 | 0.093809 | 0.113238 | | 3600 | 0.192700 | 0.082918 | 0.108139 | | 3800 | 0.182000 | 0.084487 | 0.103371 | | 4000 | 0.167700 | 0.091847 | 0.104960 | | 4200 | 0.183700 | 0.085223 | 0.103040 | | 4400 | 0.174400 | 0.083862 | 0.100589 | | 4600 | 0.163100 | 0.086493 | 0.099728 | | 4800 | 0.162000 | 0.081734 | 0.097543 | | 5000 | 0.153600 | 0.077223 | 0.092974 | | 5200 | 0.153700 | 0.086217 | 0.090789 | | 5400 | 0.140200 | 0.093256 | 0.100457 | | 5600 | 0.142900 | 0.086903 | 0.097742 | | 5800 | 0.131400 | 0.083068 | 0.095225 | | 6000 | 0.126000 | 0.086642 | 0.091252 | | 6200 | 0.135300 | 0.083387 | 0.091186 | | 6400 | 0.126100 | 0.076479 | 0.086352 | | 6600 | 0.127100 | 0.077868 | 0.086153 | | 6800 | 0.118000 | 0.083878 | 0.087676 | | 7000 | 0.117600 | 0.085779 | 0.091054 | | 7200 | 0.113600 | 0.084197 | 0.084233 | | 7400 | 0.112000 | 0.078688 | 0.081319 | | 7600 | 0.110200 | 0.082534 | 0.086087 | | 7800 | 0.106400 | 0.077245 | 0.080988 | | 8000 | 0.102300 | 0.077497 | 0.079332 | | 8200 | 0.109500 | 0.079083 | 0.088339 | | 8400 | 0.095900 | 0.079721 | 0.077809 | | 8600 | 0.094700 | 0.079078 | 0.079730 | | 8800 | 0.097400 | 0.078785 | 0.079200 | | 9000 | 0.093200 | 0.077445 | 0.077015 | | 9200 | 0.088700 | 0.078207 | 0.076617 | | 9400 | 0.087200 | 0.078982 | 0.076485 | | 9600 | 0.089900 | 0.081209 | 0.076021 | | 9800 | 0.081900 | 0.078158 | 0.075757 | | 10000 | 0.080200 | 0.078074 | 0.074498 | | 10200 | 0.085000 | 0.078830 | 0.073373 | | 10400 | 0.080400 | 0.078144 | 0.073373 | | 10600 | 0.078200 | 0.077163 | 0.073902 | | 10800 | 0.080900 | 0.076394 | 0.072446 | | 11000 | 0.080700 | 
0.075955 | 0.071585 | | 11200 | 0.076800 | 0.077031 | 0.072313 | | 11400 | 0.076300 | 0.077401 | 0.072777 | | 11600 | 0.076700 | 0.076613 | 0.071916 | | 11800 | 0.076000 | 0.076672 | 0.071916 | | 12000 | 0.077200 | 0.076490 | 0.070989 | | 12200 | 0.076200 | 0.076688 | 0.070856 | | 12400 | 0.074400 | 0.076780 | 0.071055 | | 12600 | 0.076300 | 0.076768 | 0.071320 | | 12800 | 0.077600 | 0.076727 | 0.071055 | | 13000 | 0.077700 | 0.076714 | 0.071254 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id sammy786/wav2vec2-xlsr-tatar --dataset mozilla-foundation/common_voice_8_0 --config tt --split test ```
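#### Example usage (sketch) A minimal transcription sketch, assuming the checkpoint loads from the Hub under this repo id and that the input audio is (or is resampled to) 16 kHz mono; the file path is a placeholder.

```python
# Minimal sketch, not an official example; the audio path is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sammy786/wav2vec2-xlsr-tatar")
print(asr("/path/to/tatar_speech_16khz.wav")["text"])
```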
b1dbbfa11f379df67a6adff22ea28bb4
wyu1/GenRead-3B-WebQ
wyu1
t5
5
3
transformers
0
null
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
719
false
# GenRead: FiD model trained on WebQ -- This is the model checkpoint of GenRead [2], based on T5-3B and trained on the WebQ dataset [1]. -- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev at 11500 steps. References: [1] Semantic parsing on freebase from question-answer pairs. EMNLP 2013. [2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022. ## Model performance We evaluate it on the WebQ dataset; the EM score is 54.36.
f32872a0fbf1d7f233883f9000414193
speechbrain/asr-transformer-aishell
speechbrain
null
8
129
speechbrain
4
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['aishell']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'CTC', 'Attention', 'Transformers', 'pytorch', 'speechbrain']
false
true
true
4,175
false
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Transformer for AISHELL (Mandarin Chinese) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on AISHELL (Mandarin Chinese) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Dev CER | Test CER | GPUs | Full Results | |:-------------:|:--------------:|:--------------:|:--------:|:--------:| | 05-03-21 | 5.60 | 6.04 | 2xV100 32GB | [Google Drive](https://drive.google.com/drive/folders/1zlTBib0XEwWeyhaXDXnkqtPsIBI18Uzs?usp=sharing)| ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Acoustic model made of a transformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. To Train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1). The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell", savedir="pretrained_models/asr-transformer-aishell") asr_model.transcribe_file("speechbrain/asr-transformer-aishell/example_mandarin.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. ### Training The model was trained with SpeechBrain (Commit hash: '986a2175'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/AISHELL-1/ASR/transformer/ python train.py hparams/train_ASR_transformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1QU18YoauzLOXueogspT0CgR5bqJ6zFfu?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
dde46b8c75eff4609aeba85bbd0dc57a
nc33/finetune_rte_model
nc33
deberta-v2
17
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,337
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune_rte_model This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5582 - Accuracy: 0.8195 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.5364 | 0.7617 | | No log | 2.0 | 312 | 0.4650 | 0.8195 | | No log | 3.0 | 468 | 0.5582 | 0.8195 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
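### Example usage (sketch) A minimal sentence-pair classification sketch, assuming the checkpoint loads from the Hub under this repo id; because the fine-tuning dataset is not documented, the label names may be generic (`LABEL_0`/`LABEL_1`), and the premise/hypothesis strings are placeholders.

```python
# Minimal sketch, not an official example; input strings are placeholders and
# id2label may not carry meaningful names.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "nc33/finetune_rte_model"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode the sentence pair together so the model sees premise and hypothesis jointly
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```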
075bbb814f6b0f5c5d05224edb8f7dff
Helsinki-NLP/opus-mt-tvl-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-tvl-fr * source languages: tvl * target languages: fr * OPUS readme: [tvl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.fr | 24.0 | 0.410 |
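## Example usage (sketch) A minimal translation sketch with the Marian classes, assuming the checkpoint loads from the Hub under `Helsinki-NLP/opus-mt-tvl-fr` (requires `sentencepiece`); the source sentence is only a placeholder to replace with your own Tuvaluan text.

```python
# Minimal sketch, not an official example; the input string is a placeholder.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-tvl-fr"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

src_text = ["Fakafetai lasi."]  # placeholder Tuvaluan input; replace with your own text
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```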
1745c34b8fde69fef36726e053e7d865
jonatasgrosman/exp_w2v2t_fa_no-pretraining_s650
jonatasgrosman
wav2vec2
10
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fa']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fa']
false
true
true
414
false
# exp_w2v2t_fa_no-pretraining_s650 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
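## Usage (sketch) Since the card points to the HuggingSound tool without a snippet, here is a minimal transcription sketch, assuming `huggingsound` is installed (`pip install huggingsound`); the audio paths are placeholders for 16 kHz recordings.

```python
# Minimal sketch, not an official example; audio paths are placeholders.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_no-pretraining_s650")
transcriptions = model.transcribe(["/path/to/file1.mp3", "/path/to/file2.wav"])
print(transcriptions[0]["transcription"])
```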
d7528e7e6f9a583a699523b447c5d346
Helsinki-NLP/opus-mt-ja-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
false
### opus-mt-ja-sv * source languages: ja * target languages: sv * OPUS readme: [ja-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.sv | 26.1 | 0.445 |
62ade78b1b3342241770f993c49f5625
Habana/t5
Habana
null
3
1,446
null
0
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,424
false
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana). ## T5 model HPU configuration This model only contains the `GaudiConfig` file for running the [T5](https://huggingface.co/t5-base) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** This enables to specify: - `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP) - `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation - `hmp_bf16_ops`: list of operators that should run in bf16 - `hmp_fp32_ops`: list of operators that should run in fp32 - `hmp_is_verbose`: verbosity - `use_fused_adam`: whether to use Habana's custom AdamW implementation - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator ## Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/summarization/run_summarization.py) is a summarization example script to fine-tune a model. You can run it with T5-small with the following command: ```bash python run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --overwrite_output_dir \ --predict_with_generate \ --use_habana \ --use_lazy_mode \ --gaudi_config_name Habana/t5 \ --ignore_pad_token_for_loss False \ --pad_to_max_length \ --save_strategy epoch \ --throughput_warmup_steps 2 ``` Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
f1913ef0b0c067b72548fcb035be5f10
mqy/mt5-small-finetuned-19jan-7
mqy
mt5
13
6
transformers
0
summarization
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
6,651
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-19jan-7 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6123 - Rouge1: 6.8298 - Rouge2: 0.1667 - Rougel: 6.5947 - Rougelsum: 6.6685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 16.2953 | 1.0 | 50 | 5.4420 | 2.3065 | 0.0 | 2.3217 | 2.3089 | | 10.6895 | 2.0 | 100 | 4.4691 | 3.2975 | 0.3693 | 3.2976 | 3.3376 | | 7.0377 | 3.0 | 150 | 3.2638 | 4.1896 | 0.3485 | 4.1487 | 4.1878 | | 5.7221 | 4.0 | 200 | 3.0772 | 6.2012 | 0.7955 | 6.1846 | 6.3083 | | 4.9356 | 5.0 | 250 | 3.0312 | 5.2032 | 0.8545 | 5.1829 | 5.2263 | | 4.4656 | 6.0 | 300 | 3.0022 | 5.6901 | 1.3505 | 5.6184 | 5.6791 | | 4.2279 | 7.0 | 350 | 2.9585 | 5.6907 | 1.5424 | 5.644 | 5.7768 | | 4.0578 | 8.0 | 400 | 2.9098 | 5.7425 | 1.0202 | 5.6452 | 5.7881 | | 3.9236 | 9.0 | 450 | 2.8686 | 6.2001 | 1.1793 | 6.1891 | 6.2508 | | 3.8237 | 10.0 | 500 | 2.8222 | 5.9182 | 1.1793 | 5.8436 | 5.9807 | | 3.7078 | 11.0 | 550 | 2.7890 | 5.4733 | 1.3896 | 5.3702 | 5.4957 | | 3.641 | 12.0 | 600 | 2.7522 | 5.8312 | 1.1793 | 5.784 | 5.9037 | | 3.5527 | 13.0 | 650 | 2.7168 | 6.3129 | 1.1793 | 6.2924 | 6.384 | | 3.5281 | 14.0 | 700 | 2.7000 | 9.1787 | 0.8333 | 9.1491 | 9.2241 | | 3.4547 | 15.0 | 750 | 2.6966 | 7.8778 | 0.3333 | 7.8306 | 7.9167 | | 3.4386 | 16.0 | 800 | 2.6892 | 8.3907 | 0.3333 | 8.3167 | 8.4 | | 3.3749 | 17.0 | 850 | 2.6786 | 8.6167 | 0.4167 | 8.5917 | 8.5787 | | 3.3681 | 18.0 | 900 | 2.6895 | 8.2466 | 0.4167 | 8.1799 | 8.2407 | | 3.3173 | 19.0 | 950 | 2.6957 | 8.1742 | 0.4167 | 8.1197 | 8.1429 | | 3.3034 | 20.0 | 1000 | 2.6721 | 8.2466 | 0.4167 | 8.1799 | 8.2407 | | 3.2594 | 21.0 | 1050 | 2.6698 | 8.569 | 0.4167 | 8.5419 | 8.619 | | 3.2138 | 22.0 | 1100 | 2.6676 | 8.2722 | 0.4167 | 8.2343 | 8.3037 | | 3.2239 | 23.0 | 1150 | 2.6537 | 8.1444 | 0.4167 | 8.1051 | 8.1301 | | 3.1887 | 24.0 | 1200 | 2.6529 | 8.1444 | 0.4167 | 8.1051 | 8.1301 | | 3.1641 | 25.0 | 1250 | 2.6685 | 7.7777 | 0.1667 | 7.7204 | 7.8143 | | 3.162 | 26.0 | 1300 | 2.6619 | 8.3776 | 0.3333 | 8.4135 | 8.4692 | | 3.1114 | 27.0 | 1350 | 2.6632 | 8.3776 | 0.3333 | 8.4135 | 8.4692 | | 3.0645 | 28.0 | 1400 | 2.6438 | 7.8811 | 0.3333 | 7.8333 | 7.9484 | | 3.0984 | 29.0 | 1450 | 2.6384 | 7.3936 | 0.1667 | 7.3609 | 7.4051 | | 3.0712 | 30.0 | 1500 | 2.6389 | 6.9609 | 0.1667 | 6.875 | 7.0253 | | 3.0662 | 31.0 | 1550 | 2.6346 | 7.95 | 0.1667 | 7.9051 | 8.0218 | | 3.0294 | 32.0 | 1600 | 2.6420 | 7.3936 | 0.1667 | 7.3609 | 7.4051 | | 3.0143 | 33.0 | 1650 | 2.6325 | 7.6526 | 0.1667 | 7.6869 | 7.7551 | | 3.002 | 34.0 | 1700 | 2.6384 | 7.9436 | 0.1667 | 7.9317 | 8.016 | | 2.9964 | 35.0 | 1750 | 2.6262 | 8.2958 | 0.4167 | 8.2317 | 8.3936 | | 2.9893 | 36.0 | 
1800 | 2.6351 | 8.6535 | 0.1667 | 8.616 | 8.7333 | | 2.9862 | 37.0 | 1850 | 2.6320 | 8.2452 | 0.1667 | 8.2 | 8.3218 | | 2.9588 | 38.0 | 1900 | 2.6214 | 7.6656 | 0.1667 | 7.6819 | 7.7 | | 2.9697 | 39.0 | 1950 | 2.6229 | 7.1452 | 0.1667 | 7.1051 | 7.1942 | | 2.9433 | 40.0 | 2000 | 2.6209 | 7.5775 | 0.4167 | 7.4893 | 7.5833 | | 2.9306 | 41.0 | 2050 | 2.6197 | 7.525 | 0.4167 | 7.4435 | 7.5351 | | 2.9382 | 42.0 | 2100 | 2.6190 | 7.525 | 0.4167 | 7.4435 | 7.5351 | | 2.9269 | 43.0 | 2150 | 2.6234 | 7.3614 | 0.4167 | 7.2092 | 7.3592 | | 2.9152 | 44.0 | 2200 | 2.6237 | 6.9976 | 0.1667 | 6.8777 | 7.0333 | | 2.9137 | 45.0 | 2250 | 2.6213 | 6.9976 | 0.1667 | 6.8777 | 7.0333 | | 2.9011 | 46.0 | 2300 | 2.6212 | 6.9976 | 0.1667 | 6.8777 | 7.0333 | | 2.8941 | 47.0 | 2350 | 2.6188 | 6.7768 | 0.1667 | 6.6509 | 6.812 | | 2.9143 | 48.0 | 2400 | 2.6126 | 7.0875 | 0.1667 | 6.803 | 6.9337 | | 2.8798 | 49.0 | 2450 | 2.6207 | 6.4458 | 0.1667 | 6.3221 | 6.4527 | | 2.8701 | 50.0 | 2500 | 2.6172 | 6.7542 | 0.1667 | 6.4857 | 6.5729 | | 2.8823 | 51.0 | 2550 | 2.6161 | 6.9971 | 0.1667 | 6.6819 | 6.7968 | | 2.8724 | 52.0 | 2600 | 2.6171 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | | 2.8635 | 53.0 | 2650 | 2.6176 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | | 2.8803 | 54.0 | 2700 | 2.6134 | 6.1417 | 0.1667 | 5.929 | 6.0423 | | 2.8608 | 55.0 | 2750 | 2.6118 | 6.4953 | 0.1667 | 6.2113 | 6.3554 | | 2.8655 | 56.0 | 2800 | 2.6125 | 6.4976 | 0.1667 | 6.2625 | 6.3539 | | 2.856 | 57.0 | 2850 | 2.6136 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | | 2.8837 | 58.0 | 2900 | 2.6124 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | | 2.8871 | 59.0 | 2950 | 2.6123 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | | 2.8537 | 60.0 | 3000 | 2.6123 | 6.8298 | 0.1667 | 6.5947 | 6.6685 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
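### Example usage (sketch) A minimal summarization sketch, assuming the checkpoint loads from the Hub under this repo id; since the fine-tuning data is not documented, the input text (and its language) is a placeholder.

```python
# Minimal sketch, not an official example; the input document is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-19jan-7")
text = "Replace this placeholder with a document in the language the model was fine-tuned on."
print(summarizer(text, max_length=48, min_length=8)[0]["summary_text"])
```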
4dbf8b044d5e11957f0efbbbef5038ba
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
Vibharkchauhan
distilbert
16
7
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,464
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Precision: 0.9193 - Recall: 0.9311 - F1: 0.9251 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2393 | 1.0 | 878 | 0.0732 | 0.9052 | 0.9207 | 0.9129 | 0.9801 | | 0.0569 | 2.0 | 1756 | 0.0626 | 0.9193 | 0.9311 | 0.9251 | 0.9824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
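### Example usage (sketch) A minimal NER sketch, assuming the checkpoint loads from the Hub under this repo id and that the CoNLL-2003 label names (PER, ORG, LOC, MISC) were preserved during fine-tuning; the input sentence is a placeholder.

```python
# Minimal sketch, not an official example; entity groups are assumed to follow CoNLL-2003.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Vibharkchauhan/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
for entity in ner("Angela Merkel visited the United Nations in New York."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 4))
```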
ffedab754f00571a638f1ade52413a51
akshaychaudhary/distilbert-base-uncased-finetuned-cloud-ner
akshaychaudhary
distilbert
13
15
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,557
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cloud-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0812 - Precision: 0.8975 - Recall: 0.9080 - F1: 0.9027 - Accuracy: 0.9703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 166 | 0.1326 | 0.7990 | 0.8043 | 0.8017 | 0.9338 | | No log | 2.0 | 332 | 0.0925 | 0.8770 | 0.8946 | 0.8858 | 0.9618 | | No log | 3.0 | 498 | 0.0812 | 0.8975 | 0.9080 | 0.9027 | 0.9703 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
b41881b52feda70b812ae7eecad0b16f
viktor-ogay/finetuning-sentiment-model-3000-samples
viktor-ogay
distilbert
16
11
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,053
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3206 - Accuracy: 0.87 - F1: 0.8704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
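### Example usage (sketch) A minimal sentiment-classification sketch, assuming the checkpoint loads from the Hub under this repo id; if `id2label` was not customised during fine-tuning, the output labels may appear as `LABEL_0`/`LABEL_1` rather than negative/positive.

```python
# Minimal sketch, not an official example; the review text is a placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="viktor-ogay/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good, I enjoyed every minute of it."))
```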
548052ddb5c006e34c76d8e0722d6cab
spasis/bert-finetuned-ner
spasis
bert
10
13
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,515
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0569 - Precision: 0.9215 - Recall: 0.9423 - F1: 0.9318 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 439 | 0.0702 | 0.8847 | 0.9170 | 0.9006 | 0.9795 | | 0.183 | 2.0 | 878 | 0.0599 | 0.9161 | 0.9391 | 0.9274 | 0.9842 | | 0.0484 | 3.0 | 1317 | 0.0569 | 0.9215 | 0.9423 | 0.9318 | 0.9850 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
aa340ec9010e218399d20d61d6d0248c
ramybaly/ner_nerd_fine
ramybaly
bert
25
7
transformers
0
token-classification
true
false
false
apache-2.0
null
['nerd']
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,737
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner_nerd_fine This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the nerd dataset. It achieves the following results on the evaluation set: - Loss: 0.3373 - Precision: 0.6326 - Recall: 0.6734 - F1: 0.6524 - Accuracy: 0.9050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.6219 | 1.0 | 8235 | 0.3347 | 0.6066 | 0.6581 | 0.6313 | 0.9015 | | 0.3071 | 2.0 | 16470 | 0.3165 | 0.6349 | 0.6637 | 0.6490 | 0.9060 | | 0.2384 | 3.0 | 24705 | 0.3311 | 0.6373 | 0.6769 | 0.6565 | 0.9068 | | 0.1834 | 4.0 | 32940 | 0.3414 | 0.6349 | 0.6780 | 0.6557 | 0.9069 | | 0.1392 | 5.0 | 41175 | 0.3793 | 0.6334 | 0.6775 | 0.6547 | 0.9068 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.2
032e2b8bae2d661b4c9c9f172186a93e
Lariatty/joopich
Lariatty
null
19
8
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
723
false
### joopich Dreambooth model trained by Lariatty with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Lariatty/joopich/resolve/main/sample_images/00044-3586949194-joopich_starr.png)
c5c0b54587b55d14714c34047d0b2dd0
kerkathy/distilbert-base-uncased-finetuned-imdb
kerkathy
distilbert
9
6
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
4188d1f8d2336af8811343ee16b698b6
YoungMasterFromSect/Chibi
YoungMasterFromSect
null
10
0
null
10
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
592
false
Sample images: <style> img { display: inline-block; } </style> <img src="https://huggingface.co/YoungMasterFromSect/Chibi/resolve/main/1.png" width="300" height="200"> <img src="https://huggingface.co/YoungMasterFromSect/Chibi/resolve/main/2.png" width="300" height="200"> <img src="https://huggingface.co/YoungMasterFromSect/Chibi/resolve/main/3.png" width="300" height="300"> <img src="https://huggingface.co/YoungMasterFromSect/Chibi/resolve/main/4.png" width="300" height="300"> <img src="https://huggingface.co/YoungMasterFromSect/Chibi/resolve/main/5.png" width="300" height="300">
4a7d263fcbe5ea51292a745c8e8b2bd8
Lexemo/roberta_large_legal_act_extraction
Lexemo
roberta
9
10
transformers
1
token-classification
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
6,797
false
# Legal act Extraction Model With growing legal complexity keeping track of changes in interconnectivity and hierarchical structure of the legislation is a challenging task. Entity extraction technique (also known as token classification) facilitates document analysis by assigning a label to each word in a text. A way to decide which data elements are to be extracted and how they should be labeled mostly depends on a particular business problem and is limited only by a tokenization process meaning that an element shouldn’t be less than a token as split by a tokenizer. So as long as these data elements correspond to at least one whole token they could represent legal terms, legal entities, legal parties, deadlines and so on. This model is fine-tuned to label mentioned legal acts and their articles. Extracted information could be used to create an interconnectivity map for legal acts. ## Model Description This model is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large). More details about RoBERTa large are available in [RoBERTa large model card](https://huggingface.co/roberta-large). | Id | Label | Description | | -------- | ------------------------------------------ | ----------------------------------------------------------------------- | | 0 | O | Not a legal act and not an article | | 1 | abbreviation_relevant_following_act | A legal act abbreviation relevant to the following legal act | | 2 | abbreviation_relevant_previous_act | A legal act abbreviation relevant to a previously mentioned legal act | | 3 | another_act | A legal act | | 4 | another_act_abbreviation | A legal act mentioned as an abbreviation | | 5 | another_act_equal_previous_act | An assumed legal act introduced previously | | 6 | another_act_sequence_end | Inside a sequence of legal acts | | 7 | another_act_sequence_start | At the beginning of a sequence of legal acts | | 8 | another_article_equal_previous_article | An assumed article introduced previously | | 9 | article_current | An article mentioning itself | | 10 | article_relevant_current_act | An article of the same legal act as the one being processed | | 11 | article_relevant_current_act_range_end | A range end of articles belonging to the current act | | 12 | article_relevant_current_act_range_start | A range start of articles belonging to the current act | | 13 | article_relevant_following_act | An article of a following legal act | | 15 | article_relevant_following_act_range_end | A range end of articles belonging to a following act | | 16 | article_relevant_following_act_range_start | A range start of articles belonging to a following legal act | | 17 | article_relevant_previous_act | An article of a previously mentioned legal act | | 18 | article_relevant_previous_act_range_end | A range end of articles belonging to a previously mentioned legal act | | 19 | article_relevant_previous_act_range_start | A range start of articles belonging to a previously mentioned legal act | | 20 | current_act | A legal act mentioning itself | | 21 | treaty_abbreviation | A treaty mentioned as an abbreviation | | 22 | treaty_name | A treaty | | 23 | service_label | A token comprising more than 1 label | ## Intended Uses & Limitations The model could be used to extract mentioned legal acts and their articles. ### Limitations This legal-act extraction model is very domain-specific and will perform well on legal texts. It's not recommended to use this model for other domains, but you are free to test it out. 
It was intended for English documents only. ### How To Use ```python from transformers import ( TokenClassificationPipeline, RobertaForTokenClassification, RobertaTokenizerFast, ) legal_act_extraction_model = RobertaForTokenClassification.from_pretrained( 'Lexemo/roberta_large_legal_act_extraction') tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large") pypeline = TokenClassificationPipeline(model=legal_act_extraction_model, tokenizer=tokenizer, aggregation_strategy='simple') ``` ```python # Inference import pandas as pd from tabulate import tabulate text = """When Member States adopt those measures, they shall contain a reference to this Directive or be accompanied by such reference on the occasion of their official publication. They shall also include a statement that references in existing laws, regulations and administrative provisions to Article 9 of Directive 97/23/EC shall be construed as references to Article 13 of this Directive. Member States shall determine how such reference is to be made and how that statement is to be formulated.""" entities = pypeline(text) df = pd.DataFrame(entities) print(tabulate(df, showindex=True, headers=df.columns)) ``` ``` # Output entity_group score word start end -- ------------------------------ -------- ------------------ ------- ----- 0 current_act 0.999999 Directive 80 89 1 article_relevant_following_act 0.999995 9 296 297 2 another_act 0.999999 Directive 97/23/EC 301 319 3 article_relevant_following_act 0.999996 13 364 366 4 current_act 0.999999 Directive 375 384 ``` ## Fine-tuning hyper-parameters - learning_rate = 2e-5 - batch_size = 4 - weight_decay=0.01 - max_seq_length = 514 - num_train_epochs = 56
a45a306f802f39048481979e8c2d7937
sd-concepts-library/phoenix-01
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,043
false
### phoenix-01 on Stable Diffusion This is the `<phoenix-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<phoenix-style> 0](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/1.jpeg) ![<phoenix-style> 1](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/2.jpeg) ![<phoenix-style> 2](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/0.jpeg) ![<phoenix-style> 3](https://huggingface.co/sd-concepts-library/phoenix-01/resolve/main/concept_images/3.jpeg)
1c9b2da16a2c5e01dc37dd7918e03906
sd-concepts-library/anime-boy
sd-concepts-library
null
10
0
null
5
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,150
false
### anime boy on Stable Diffusion This is the `<myAItestShota>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<myAItestShota> 0](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/3.jpeg) ![<myAItestShota> 1](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/0.jpeg) ![<myAItestShota> 2](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/2.jpeg) ![<myAItestShota> 3](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/1.jpeg) ![<myAItestShota> 4](https://huggingface.co/sd-concepts-library/anime-boy/resolve/main/concept_images/4.jpeg)
faecf0aa379ec7da4106d2053c44d031
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_cola_256
gokuls
mobilebert
17
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,993
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_cola_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6753 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.8155 | 1.0 | 67 | 0.6867 | 0.0 | | 0.797 | 2.0 | 134 | 0.6862 | 0.0 | | 0.7961 | 3.0 | 201 | 0.6836 | 0.0 | | 0.7944 | 4.0 | 268 | 0.6821 | 0.0 | | 0.7863 | 5.0 | 335 | 0.6753 | 0.0 | | 0.7138 | 6.0 | 402 | 0.6790 | 0.1085 | | 0.6262 | 7.0 | 469 | 0.7238 | 0.1231 | | 0.5782 | 8.0 | 536 | 0.7285 | 0.1281 | | 0.5482 | 9.0 | 603 | 0.7484 | 0.1281 | | 0.5318 | 10.0 | 670 | 0.7918 | 0.1182 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
6da6feefd2ba615855d3a1ecdd09f98e
generateai/my_awesome_model4
generateai
distilbert
16
2
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,267
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model4 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 25.4886 - Accuracy: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.02 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.6252 | 1.0 | 1 | 3.9768 | 0.0 | | 1.0027 | 2.0 | 2 | 25.4886 | 0.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
97c90e370036aa43e740510b545cb0c1
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_mrpc_256
gokuls
mobilebert
17
0
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,439
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_mrpc_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 - F1: 1.0 - Combined Score: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.1854 | 1.0 | 1959 | 0.0199 | 0.9975 | 0.9982 | 0.9979 | | 0.04 | 2.0 | 3918 | 0.0050 | 0.9975 | 0.9982 | 0.9979 | | 0.0253 | 3.0 | 5877 | 0.0015 | 1.0 | 1.0 | 1.0 | | 0.0175 | 4.0 | 7836 | 0.0003 | 1.0 | 1.0 | 1.0 | | 0.0134 | 5.0 | 9795 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0107 | 6.0 | 11754 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0081 | 7.0 | 13713 | 0.0012 | 1.0 | 1.0 | 1.0 | | 0.0062 | 8.0 | 15672 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0061 | 9.0 | 17631 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0044 | 10.0 | 19590 | 0.0002 | 1.0 | 1.0 | 1.0 | | 0.0041 | 11.0 | 21549 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0034 | 12.0 | 23508 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0029 | 13.0 | 25467 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0016 | 14.0 | 27426 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0019 | 15.0 | 29385 | 0.0140 | 0.9975 | 0.9982 | 0.9979 | | 0.0018 | 16.0 | 31344 | 0.0001 | 1.0 | 1.0 | 1.0 | | 0.0012 | 17.0 | 33303 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0013 | 18.0 | 35262 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0008 | 19.0 | 37221 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0011 | 20.0 | 39180 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0005 | 21.0 | 41139 | 0.0007 | 1.0 | 1.0 | 1.0 | | 0.0009 | 22.0 | 43098 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 23.0 | 45057 | 0.0000 | 1.0 | 1.0 | 1.0 | | 0.0004 | 24.0 | 47016 | 0.0000 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
09ef16cf93f5c9a9b15e9d42e6e0d887
pratultandon/recipe-nlg-gpt2-train11_14
pratultandon
gpt2
18
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,213
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-nlg-gpt2-train11_14 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the Recipe-NLG dataset. ## Model description TEST MODEL - LESS THAN 0.10 EPOCHS OF TRAINING COMPLETED ## Intended uses & limitations Experimenting with GPT-2 for recipe generation. ## Training and evaluation data The [RecipeNLG](https://huggingface.co/mbien/recipenlg/) dataset was used for this task. 5% of the dataset was held out for evaluation. ## Training procedure An RTX 3090 was used on Vast.AI; training took about 14 hours with a batch size of 8 and fp16 enabled. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.2
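The card above names the intended use (recipe generation with GPT-2) but includes no snippet. Below is a minimal, hedged generation sketch that assumes the checkpoint loads as a standard GPT-2 causal LM; the prompt string is an illustrative assumption, since the exact Recipe-NLG prompt/control-token format is not documented in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pratultandon/recipe-nlg-gpt2-train11_14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical ingredient-style prompt; adjust to whatever format the checkpoint was trained on.
prompt = "chicken, rice, garlic"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```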
5945e55e6d416ee52e4e3b9e579129f3
Helsinki-NLP/opus-mt-cpp-cpp
Helsinki-NLP
marian
11
9
transformers
0
translation
true
true
false
apache-2.0
['id', 'cpp']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,483
false
### cpp-cpp * source group: Creoles and pidgins, Portuguese-based * target group: Creoles and pidgins, Portuguese-based * OPUS readme: [cpp-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md) * model: transformer * source language(s): ind pap * target language(s): ind pap * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa-msa.msa.msa | 0.7 | 0.149 | | Tatoeba-test.msa-pap.msa.pap | 31.7 | 0.577 | | Tatoeba-test.multi.multi | 21.1 | 0.369 | | Tatoeba-test.pap-msa.pap.msa | 17.7 | 0.197 | ### System Info: - hf_name: cpp-cpp - source_languages: cpp - target_languages: cpp - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['id', 'cpp'] - src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt - src_alpha3: cpp - tgt_alpha3: cpp - short_pair: cpp-cpp - chrF2_score: 0.369 - bleu: 21.1 - brevity_penalty: 0.882 - ref_len: 18.0 - src_name: Creoles and pidgins, Portuguese-based - tgt_name: Creoles and pidgins, Portuguese-based - train_date: 2020-07-26 - src_alpha2: cpp - tgt_alpha2: cpp - prefer_old: False - long_pair: cpp-cpp - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
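Because this multilingual OPUS model needs a sentence-initial `>>id<<` target-language token, a short usage sketch may help. The choice of `>>pap<<` as the target token and the Indonesian example sentence are assumptions for illustration.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cpp-cpp"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source sentence with the desired target-language token, e.g. >>pap<< or >>ind<<.
src_text = [">>pap<< Saya tidak mengerti."]  # illustrative Indonesian input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```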
28fb362a299994381e48d00df57662e3
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mnli
gokuls
distilbert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,946
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4989 - Accuracy: 0.6525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.575 | 1.0 | 1534 | 0.5428 | 0.5554 | | 0.5345 | 2.0 | 3068 | 0.5205 | 0.5987 | | 0.511 | 3.0 | 4602 | 0.5105 | 0.6222 | | 0.4917 | 4.0 | 6136 | 0.5021 | 0.6360 | | 0.4735 | 5.0 | 7670 | 0.5004 | 0.6470 | | 0.4557 | 6.0 | 9204 | 0.4976 | 0.6534 | | 0.4391 | 7.0 | 10738 | 0.4982 | 0.6606 | | 0.4231 | 8.0 | 12272 | 0.4982 | 0.6586 | | 0.4082 | 9.0 | 13806 | 0.5020 | 0.6587 | | 0.394 | 10.0 | 15340 | 0.5082 | 0.6561 | | 0.3816 | 11.0 | 16874 | 0.5140 | 0.6617 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
afbab1a27e230e80d8a3ed0286b561b9
DOOGLAK/Tagged_Uni_50v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
bert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['tagged_uni50v0_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,563
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_50v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni50v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6180 - Precision: 0.1063 - Recall: 0.0090 - F1: 0.0166 - Accuracy: 0.7870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 14 | 0.7325 | 0.0 | 0.0 | 0.0 | 0.7803 | | No log | 2.0 | 28 | 0.6458 | 0.0860 | 0.0039 | 0.0075 | 0.7838 | | No log | 3.0 | 42 | 0.6180 | 0.1063 | 0.0090 | 0.0166 | 0.7870 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
ab05c4c11f7848ae933bce17851a0a78
benjamin/roberta-base-wechsel-ukrainian
benjamin
roberta
13
23
transformers
0
fill-mask
true
false
false
mit
['uk']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,132
false
# roberta-base-wechsel-ukrainian [`roberta-base`](https://huggingface.co/roberta-base) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://aclanthology.org/2022.naacl-main.293/). # Evaluation Evaluation was done on [lang-uk's ner-uk project](https://github.com/lang-uk/ner-uk), the Ukrainian portion of [WikiANN](https://huggingface.co/datasets/wikiann) and the [Ukrainian IU corpus from the Universal Dependencies project](https://github.com/UniversalDependencies/UD_Ukrainian-IU). Evaluation results are the mean of 5 runs with different seeds. __Validation Results__ | | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) | |:-------------------------------------------------|:-------------------------|:-------------|:-------------------------| | roberta-base-wechsel-ukrainian | 88.06 (0.50) | 92.96 (0.08) | 98.70 (0.05) | | roberta-large-wechsel-ukrainian | __89.27 (0.53)__ | __93.22 (0.15)__ | __98.86 (0.03)__ | | | roberta-base-scratch-ukrainian* | 85.49 (0.88) | 91.91 (0.08) | 98.49 (0.04) | | roberta-large-scratch-ukrainian* | 86.54 (0.70) | 92.39 (0.16) | 98.65 (0.09) | | | dbmdz/electra-base-ukrainian-cased-discriminator | 87.49 (0.52) | 93.20 (0.16) | 98.60 (0.03) | | xlm-roberta-base | 86.68 (0.44) | 92.41 (0.13) | 98.53 (0.02) | | xlm-roberta-large | 86.64 (1.61) | 93.01 (0.13) | 98.71 (0.04) | __Test Results__ | | lang-uk NER (Micro F1) | WikiANN (Micro F1) | UD Ukrainian IU POS (Accuracy) | |:-------------------------------------------------|:-------------------------|:-------------|:-------------------------| | roberta-base-wechsel-ukrainian | 90.81 (1.51) | 92.98 (0.12) | 98.57 (0.03) | | roberta-large-wechsel-ukrainian | __91.24 (1.16)__ | __93.22 (0.17)__ | __98.74 (0.06)__ | | | roberta-base-scratch-ukrainian* | 89.57 (1.01) | 92.05 (0.09) | 98.31 (0.08) | | roberta-large-scratch-ukrainian* | 89.96 (0.89) | 92.49 (0.15) | 98.52 (0.04) | | | dbmdz/electra-base-ukrainian-cased-discriminator | 90.43 (1.29) | 92.99 (0.11) | 98.59 (0.06) | | xlm-roberta-base | 90.86 (0.81) | 92.27 (0.09) | 98.45 (0.07) | | xlm-roberta-large | 90.16 (2.98) | 92.92 (0.19) | 98.71 (0.04) | \*trained using the same exact training setup as the wechsel-\* models, but without parameter transfer from WECHSEL. # License MIT
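The card reports downstream scores but no usage snippet; a minimal fill-mask sketch follows, assuming the checkpoint keeps the standard RoBERTa `<mask>` token. The Ukrainian example sentence is an illustrative assumption.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="benjamin/roberta-base-wechsel-ukrainian")

# "Київ — столиця <mask>." ("Kyiv is the capital of <mask>.")
for prediction in unmasker("Київ — столиця <mask>."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```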
c5cc7e0a9fad28cf4e3c762842dd8081
zdreiosis/bert-finetuned-sem_eval-english
zdreiosis
bert
18
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['3rd', 'generated_from_trainer']
true
true
true
1,054
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-sem_eval-english This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5536 - F1: 0.5455 - Roc Auc: 0.6968 - Accuracy: 0.1839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.10.3
b9f8f8748d07f8feb1966637aca21c7a
jonatasgrosman/exp_w2v2t_en_hubert_s875
jonatasgrosman
hubert
10
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
458
false
# exp_w2v2t_en_hubert_s875 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
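Since the card points to the HuggingSound tool, a short transcription sketch may help; it assumes the current HuggingSound API (`SpeechRecognitionModel.transcribe`). The audio paths are placeholders, and the input must be sampled at 16 kHz as noted above.

```python
from huggingsound import SpeechRecognitionModel  # pip install huggingsound

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_hubert_s875")

# Placeholder paths; audio should be sampled at 16 kHz.
audio_paths = ["/path/to/sample1.wav", "/path/to/sample2.mp3"]
transcriptions = model.transcribe(audio_paths)
for result in transcriptions:
    print(result["transcription"])
```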
f09f0316992f2c86819fd2ba04a68dea
Helsinki-NLP/opus-mt-ca-en
Helsinki-NLP
marian
10
4,012
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
false
### opus-mt-ca-en * source languages: ca * target languages: en * OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ca.en | 51.4 | 0.678 |
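A minimal Catalan-to-English usage sketch (the example sentence is an assumption):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ca-en")
result = translator("El temps és molt bo avui.")  # "The weather is very good today."
print(result[0]["translation_text"])
```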
9e069619859620d834a458383378b991
HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi
HooshvareLab
bert
12
307
transformers
0
text-classification
true
true
true
apache-2.0
['fa']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,227
false
# ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models. ## Persian Sentiment [Digikala, SnappFood, DeepSentiPers] It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types. ### DeepSentiPers which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset. **Binary:** 1. Negative (Furious + Angry) 2. Positive (Happy + Delighted) **Multi** 1. Furious 2. Angry 3. Neutral 4. Happy 5. Delighted | Label | # | |:---------:|:----:| | Furious | 236 | | Angry | 1357 | | Neutral | 2874 | | Happy | 2848 | | Delighted | 2516 | **Download** You can download the dataset from: - [SentiPers](https://github.com/phosseini/sentipers) - [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers) ## Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers | |:------------------------:|:-----------:|:-----------:|:-----:|:-------------:| | SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 | | SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 | ## How to use :hugs: | Task | Notebook | |---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Sentiment Analysis | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
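Besides the linked notebook, a minimal inline sketch for this sentiment checkpoint might look like the following; the Persian example comment is an assumption, and the returned label names depend on the model's config.

```python
from transformers import pipeline

model_id = "HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi"
classifier = pipeline("text-classification", model=model_id)

# Example Persian comment ("This product was really great"); label/score depend on the model config.
print(classifier("این محصول واقعا عالی بود"))
```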
4721ef1cf89e603195e8e6c260de468b
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2
SetFit
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,904
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-8-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1019 - Accuracy: 0.139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1082 | 1.0 | 5 | 1.1432 | 0.0 | | 1.0524 | 2.0 | 10 | 1.1613 | 0.0 | | 1.0641 | 3.0 | 15 | 1.1547 | 0.0 | | 0.9592 | 4.0 | 20 | 1.1680 | 0.0 | | 0.9085 | 5.0 | 25 | 1.1762 | 0.0 | | 0.8508 | 6.0 | 30 | 1.1809 | 0.2 | | 0.7263 | 7.0 | 35 | 1.1912 | 0.2 | | 0.6448 | 8.0 | 40 | 1.2100 | 0.2 | | 0.5378 | 9.0 | 45 | 1.2037 | 0.2 | | 0.5031 | 10.0 | 50 | 1.2096 | 0.2 | | 0.4041 | 11.0 | 55 | 1.2203 | 0.2 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
0d73b8a27c4a294b8d1e585a496a390c
aXhyra/presentation_hate_31415
aXhyra
distilbert
10
8
transformers
0
text-classification
true
false
false
apache-2.0
null
['tweet_eval']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,399
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # presentation_hate_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8632 - F1: 0.7730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.436235805743952e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.363 | 1.0 | 282 | 0.4997 | 0.7401 | | 0.2145 | 2.0 | 564 | 0.5071 | 0.7773 | | 0.1327 | 3.0 | 846 | 0.7109 | 0.7645 | | 0.0157 | 4.0 | 1128 | 0.8632 | 0.7730 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
29783b61271ed013dbcd96afdc621dfb
Salvatore/bert-finetuned-mutation-recognition-3
Salvatore
bert
12
5
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,805
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-mutation-recognition-3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0727 - Dnamutation F1: 0.6484 - Proteinmutation F1: 0.8571 - Snp F1: 1.0 - Precision: 0.7966 - Recall: 0.7625 - F1: 0.7792 - Accuracy: 0.9872 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Dnamutation F1 | Proteinmutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:------------------:|:------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 324 | 0.0323 | 0.5996 | 0.7886 | 1.0 | 0.6583 | 0.7982 | 0.7215 | 0.9901 | | 0.0788 | 2.0 | 648 | 0.0314 | 0.6765 | 0.8783 | 1.0 | 0.7453 | 0.8571 | 0.7973 | 0.9907 | | 0.0788 | 3.0 | 972 | 0.0306 | 0.6391 | 0.8679 | 1.0 | 0.7341 | 0.8232 | 0.7761 | 0.9903 | | 0.0273 | 4.0 | 1296 | 0.0424 | 0.6360 | 0.8714 | 1.0 | 0.7792 | 0.775 | 0.7771 | 0.9885 | | 0.0178 | 5.0 | 1620 | 0.0462 | 0.5885 | 0.8683 | 1.0 | 0.7576 | 0.7589 | 0.7583 | 0.9869 | | 0.0178 | 6.0 | 1944 | 0.0531 | 0.6176 | 0.8701 | 1.0 | 0.7734 | 0.7679 | 0.7706 | 0.9873 | | 0.0165 | 7.0 | 2268 | 0.0573 | 0.6597 | 0.8658 | 1.0 | 0.8022 | 0.775 | 0.7884 | 0.9881 | | 0.0144 | 8.0 | 2592 | 0.0636 | 0.6596 | 0.8454 | 1.0 | 0.7919 | 0.7679 | 0.7797 | 0.9871 | | 0.0144 | 9.0 | 2916 | 0.0710 | 0.6568 | 0.8748 | 1.0 | 0.8159 | 0.7679 | 0.7912 | 0.9872 | | 0.0108 | 10.0 | 3240 | 0.0727 | 0.6484 | 0.8571 | 1.0 | 0.7966 | 0.7625 | 0.7792 | 0.9872 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2 - Datasets 2.0.0 - Tokenizers 0.12.1
dfa713891e04c4df00e86d42bf8ff170
jonatasgrosman/exp_w2v2t_nl_unispeech_s683
jonatasgrosman
unispeech
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['nl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'nl']
false
true
true
469
false
# exp_w2v2t_nl_unispeech_s683 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
7ff4954e65146564bc7558777a02dd41
MultiBertGunjanPatrick/multiberts-seed-2-160k
MultiBertGunjanPatrick
bert
7
4
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-2']
false
true
true
6,483
false
# MultiBERTs Seed 2 Checkpoint 160k (uncased) Seed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-160k') model = BertModel.from_pretrained("multiberts-seed-2-160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
2d7d275d8bef23eac490aae38f022dd9
EmanElgallad/whisper-small-ar
EmanElgallad
whisper
15
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
null
null
0
0
0
0
0
0
0
['hf-ast-leaderboard', 'generated_from_trainer']
true
true
true
1,536
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small arb - GP This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Dialect Arabic dataset. It achieves the following results on the evaluation set: - Loss: 2.1489 - Wer: 110.7984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9933 | 1.89 | 1000 | 2.0970 | 125.2555 | | 1.3119 | 3.79 | 2000 | 1.9818 | 113.1290 | | 0.7643 | 5.68 | 3000 | 2.0559 | 115.4176 | | 0.5144 | 7.58 | 4000 | 2.1489 | 110.7984 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
a42302b518fd3f7daad223cf34c2545b
elopezlopez/distilbert-base-uncased_fold_8_ternary_v1
elopezlopez
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,659
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_fold_8_ternary_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8474 - F1: 0.8022 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.5398 | 0.7838 | | 0.5509 | 2.0 | 578 | 0.6062 | 0.7703 | | 0.5509 | 3.0 | 867 | 0.6563 | 0.7666 | | 0.2366 | 4.0 | 1156 | 0.7688 | 0.7961 | | 0.2366 | 5.0 | 1445 | 1.0968 | 0.7690 | | 0.1247 | 6.0 | 1734 | 1.1414 | 0.7924 | | 0.0482 | 7.0 | 2023 | 1.2159 | 0.7875 | | 0.0482 | 8.0 | 2312 | 1.2703 | 0.7887 | | 0.0245 | 9.0 | 2601 | 1.3401 | 0.7985 | | 0.0245 | 10.0 | 2890 | 1.4645 | 0.7961 | | 0.0149 | 11.0 | 3179 | 1.5632 | 0.7801 | | 0.0149 | 12.0 | 3468 | 1.5249 | 0.7875 | | 0.0124 | 13.0 | 3757 | 1.6263 | 0.7948 | | 0.0038 | 14.0 | 4046 | 1.8059 | 0.7764 | | 0.0038 | 15.0 | 4335 | 1.7649 | 0.7776 | | 0.0061 | 16.0 | 4624 | 1.8293 | 0.7850 | | 0.0061 | 17.0 | 4913 | 1.8316 | 0.7887 | | 0.0022 | 18.0 | 5202 | 1.7628 | 0.7973 | | 0.0022 | 19.0 | 5491 | 1.8763 | 0.7862 | | 0.002 | 20.0 | 5780 | 1.8409 | 0.7899 | | 0.0026 | 21.0 | 6069 | 1.8146 | 0.8022 | | 0.0026 | 22.0 | 6358 | 1.8420 | 0.7973 | | 0.0008 | 23.0 | 6647 | 1.8683 | 0.8010 | | 0.0008 | 24.0 | 6936 | 1.8571 | 0.8010 | | 0.0015 | 25.0 | 7225 | 1.8474 | 0.8022 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
d96232d0a16b857523b8c8365422ce72
kakaobrain/coyo-align-b7-base
kakaobrain
null
4
0
null
0
null
false
false
false
apache-2.0
['en']
['kakaobrain/coyo-700m']
null
0
0
0
0
0
0
0
['align', 'clip']
false
true
true
1,466
false
# Model Details This is an unofficial implementation of [ALIGN](https://arxiv.org/abs/2102.05918) trained on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). The official ALIGN is trained on its dataset of 1.8B samples. That dataset is not released to the public. Instead, we trained our implementation of ALIGN model on [COYO-700M](https://github.com/kakaobrain/coyo-dataset). It's developed by Kakao Brain to validate the performance of COYO-700M dataset on a large-scale model. The training took about 8 days on TPU V3-512. ## Model Date April 2022 ## Model Type This is dual encoder model where - image encoder is using EfficientNet-B7 architecture - text encoder is using BERT-base architecture # Training data This model is trained on [COYO-700M](https://github.com/kakaobrain/coyo-dataset) dataset. # Evaluation results | | Dataset | ImageNet | Flickr30k | | MsCOCO | | |----------------------------------|:----------:|:--------:|:---------:|:-------:|:-------:|:-------:| | | | KNN | I2T R@1 | T2I R@1 | I2T R@1 | T2I R@1 | | ALIGN-L2-Large(Google) | ALIGN 1.8B | 76.4 | 88.6 | 75.7 | 58.6 | 45.6 | | ALIGN-B7-Base(Google) | ALIGN 1.8B | 69.3 | - | - | 55.4 | 41.7 | | COYO-ALIGN-B7-Base(Kakao Brain) | COYO-700M | 68.6 | 88.1 | 73.2 | 61.2 | 43.1 |
3b66b9e056e71b524f1a96a1ee93b735
nlpie/clinical-miniALBERT-312
nlpie
bert
8
2
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,552
false
# Model miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, using the BioClinicalBERT model as the teacher. This model is trained for 3 epochs on the MIMIC-III notes dataset. In terms of architecture, this model uses an embedding dimension of 312, a hidden size of 768, an MLP expansion rate of 4, and a reduction factor of 16 for bottleneck adapters. In general, this model uses 6 recursions and has a unique parameter count of 18 million parameters. # Usage Since miniALBERT uses a unique architecture, it cannot be loaded using ts.AutoModel for now. To load the model, first clone the miniALBERT GitHub project using the below code: ```bash git clone https://github.com/nlpie-research/MiniALBERT.git ``` Then use ```sys.path.append``` to add the miniALBERT files to your project and import the miniALBERT modeling file using the below code: ```Python import sys sys.path.append("PATH_TO_CLONED_PROJECT/MiniALBERT/") from minialbert_modeling import MiniAlbertForSequenceClassification, MiniAlbertForTokenClassification ``` Finally, load the model like a regular model in the transformers library using the below code: ```Python # For NER use the below code model = MiniAlbertForTokenClassification.from_pretrained("nlpie/clinical-miniALBERT-312") # For Sequence Classification use the below code model = MiniAlbertForSequenceClassification.from_pretrained("nlpie/clinical-miniALBERT-312") ``` In addition, for efficient fine-tuning using the pre-trained bottleneck adapters use the below code: ```Python model.trainAdaptersOnly() ``` # Citation If you use the model, please cite our paper: ```bibtex @misc{https://doi.org/10.48550/arxiv.2302.04725, doi = {10.48550/ARXIV.2302.04725}, url = {https://arxiv.org/abs/2302.04725}, author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Group, ISARIC Clinical Characterisation and Clifton, Lei and Merson, Laura and Clifton, David A.}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7, 68T50}, title = {Lightweight Transformers for Clinical Natural Language Processing}, publisher = {arXiv}, year = {2023}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
b52efa739dc997944267ac3d2c0a3365
jonatasgrosman/exp_w2v2t_ar_vp-nl_s756
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ar']
false
true
true
469
false
# exp_w2v2t_ar_vp-nl_s756 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
8da5c9613105679388a1c930e97a0a8f
MayaGalvez/bert-base-multilingual-cased-finetuned-multilingual-ner
MayaGalvez
bert
10
11
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,958
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-finetuned-multilingual-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2352 - Precision: 0.8109 - Recall: 0.8332 - F1: 0.8219 - Accuracy: 0.9264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.7301 | 0.16 | 100 | 0.3827 | 0.6189 | 0.7009 | 0.6573 | 0.8734 | | 0.3841 | 0.32 | 200 | 0.3195 | 0.7057 | 0.7511 | 0.7277 | 0.8922 | | 0.3451 | 0.48 | 300 | 0.2862 | 0.7094 | 0.7750 | 0.7407 | 0.8952 | | 0.3187 | 0.65 | 400 | 0.2735 | 0.7372 | 0.7802 | 0.7581 | 0.9019 | | 0.3058 | 0.81 | 500 | 0.2533 | 0.7536 | 0.8015 | 0.7768 | 0.9052 | | 0.2918 | 0.97 | 600 | 0.2458 | 0.7587 | 0.8085 | 0.7828 | 0.9126 | | 0.2425 | 1.13 | 700 | 0.2379 | 0.7742 | 0.7976 | 0.7857 | 0.9150 | | 0.2387 | 1.29 | 800 | 0.2300 | 0.7772 | 0.8108 | 0.7936 | 0.9165 | | 0.2125 | 1.45 | 900 | 0.2387 | 0.7900 | 0.8130 | 0.8014 | 0.9180 | | 0.2026 | 1.62 | 1000 | 0.2317 | 0.7877 | 0.8152 | 0.8012 | 0.9186 | | 0.1963 | 1.78 | 1100 | 0.2326 | 0.7842 | 0.8269 | 0.8049 | 0.9220 | | 0.2052 | 1.94 | 1200 | 0.2247 | 0.7924 | 0.8234 | 0.8076 | 0.9212 | | 0.1868 | 2.1 | 1300 | 0.2410 | 0.7903 | 0.8282 | 0.8088 | 0.9204 | | 0.1556 | 2.26 | 1400 | 0.2428 | 0.8064 | 0.8317 | 0.8189 | 0.9256 | | 0.153 | 2.42 | 1500 | 0.2316 | 0.8017 | 0.8282 | 0.8147 | 0.9238 | | 0.1484 | 2.58 | 1600 | 0.2379 | 0.8054 | 0.8338 | 0.8194 | 0.9258 | | 0.137 | 2.75 | 1700 | 0.2331 | 0.8101 | 0.8324 | 0.8211 | 0.9270 | | 0.1638 | 2.91 | 1800 | 0.2352 | 0.8109 | 0.8332 | 0.8219 | 0.9264 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
768afb7d207e16b1b9a1ba454cc97814
jejomi/xls-r-ta
jejomi
wav2vec2
19
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ta']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
true
true
true
1,092
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TA dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
d360c8c846dcab2b25a590ea9cb075ae
Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h-with-lm
Lemswasabi
wav2vec2
14
0
transformers
0
automatic-speech-recognition
true
false
false
mit
['lb']
null
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer']
false
true
true
1,825
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ## Model description We fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech collected from [RTL.lu](https://www.rtl.lu/). Then the model was fine-tuned on 4h of labelled Luxembourgish speech from the same domain. Additionally, we rescore the output transcription with a 5-gram language model trained on text corpora from RTL.lu and the Luxembourgish parliament. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1 ## Citation This model is a result of our paper `IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS` submitted to the [IEEE SLT 2022 workshop](https://slt2022.org/) ``` @misc{lb-wav2vec2, author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.}, keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language}, title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS}, year = {2022}, copyright = {2023 IEEE} } ```
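A minimal transcription sketch (not part of the original card): since the checkpoint ships with an n-gram language model, `pyctcdecode` and `kenlm` are assumed to be installed so the pipeline can use LM-boosted decoding, and the audio file is a hypothetical 16kHz mono recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h-with-lm",
)

# Hypothetical local Luxembourgish recording
print(asr("luxembourgish_sample.wav")["text"])
```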
5a96abfce02a90105e28575aa2aa1727
henryscheible/eval_rte
henryscheible
bert
11
1
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
883
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
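A minimal sentence-pair inference sketch (not from the original card); the premise/hypothesis pair is illustrative, and the label names depend on how the classification head was configured during fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "henryscheible/eval_rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair (textual entailment) task, so encode both sentences together
premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```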
4496c8fa8b56cca208ff5b0e33fc6cf0
flaubert/flaubert_small_cased
flaubert
flaubert
7
6,493
transformers
1
fill-mask
true
false
false
mit
['fr']
['flaubert']
null
0
0
0
0
0
0
0
['bert', 'language-model', 'flaubert', 'flue', 'french', 'flaubert-small', 'cased']
false
true
true
4,317
false
# FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/ ) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language.For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained so performance is not guaranteed. Consider using it for debugging purpose only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme." 
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
d574860680d0b7ef0a8c2b78d850451f
mgoudarz/xlm-roberta-base-finetuned-panx-de-fr
mgoudarz
xlm-roberta
9
5
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1654 - F1: 0.8590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 | | 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 | | 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
5c19b99d99f439101f3811182943371a
timm/efficientformerv2_l.snap_dist_in1k
timm
null
4
114
timm
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k']
null
0
0
0
0
0
0
0
['image-classification', 'timm']
false
true
true
4,452
false
# Model card for efficientformerv2_l.snap_dist_in1k A EfficientFormer-V2 image classification model. Pretrained with distillation on ImageNet-1k. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 26.3 - GMACs: 2.6 - Activations (M): 18.5 - Image size: 224 x 224 - **Original:** https://github.com/snap-research/EfficientFormer - **Papers:** - Rethinking Vision Transformers for MobileNet Size and Speed: https://arxiv.org/abs/2212.08059 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('efficientformerv2_l.snap_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'efficientformerv2_l.snap_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, num_features, H, W) tensor output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'efficientformerv2_l.snap_dist_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. 
for efficientformerv2_l: # torch.Size([2, 40, 56, 56]) # torch.Size([2, 80, 28, 28]) # torch.Size([2, 192, 14, 14]) # torch.Size([2, 384, 7, 7]) print(o.shape) ``` ## Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |efficientformerv2_l.snap_dist_in1k |83.628|96.54 |26.32 |224 | |efficientformer_l7.snap_dist_in1k |83.368|96.534|82.23 |224 | |efficientformer_l3.snap_dist_in1k |82.572|96.24 |31.41 |224 | |efficientformerv2_s2.snap_dist_in1k|82.128|95.902|12.71 |224 | |efficientformer_l1.snap_dist_in1k |80.496|94.984|12.29 |224 | |efficientformerv2_s1.snap_dist_in1k|79.698|94.698|6.19 |224 | |efficientformerv2_s0.snap_dist_in1k|76.026|92.77 |3.6 |224 | ## Citation ```bibtex @article{li2022rethinking, title={Rethinking Vision Transformers for MobileNet Size and Speed}, author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian}, journal={arXiv preprint arXiv:2212.08059}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
3d1be6bd97d4dcd14cc4ec03ccef261c
sarahmiller137/bioclinical-bert-ft-m3-lc
sarahmiller137
bert
8
1
transformers
0
text-classification
true
false
false
cc
['en']
['MIMIC-III\xa0']
null
0
0
0
0
0
0
0
['text classification']
false
true
true
1,477
false
## Model information: This model is the [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) model that has been finetuned using radiology report texts from the MIMIC-III database. The task performed was text classification in order to benchmark this model against a selection of other variants of BERT for the classification of MIMIC-III radiology report texts into two classes. Labels of [0,1] were assigned to radiology reports in MIMIC-III that were linked to an ICD9 diagnosis code for lung cancer = 1 and a random sample of reports which were not linked to any type of cancer diagnosis code at all = 0. ## Intended uses: This model is intended to be used to classify texts to identify the presence of lung cancer. The model will predict labels of [0,1]. ## Limitations: Note that the dataset and model may not be fully representative or suitable for all needs; it is recommended that the paper for the dataset and the base model card be reviewed before use: - [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf) - [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) ## How to use: Load the model from the library using the following checkpoints: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc") model = AutoModel.from_pretrained("sarahmiller137/bioclinical-bert-ft-m3-lc") ```
c647738215d2a1349a328a543fe852cf
deepparag/Aeona
deepparag
gpt2
10
3,567
transformers
14
conversational
true
false
false
mit
null
['blended_skill_talk']
null
1
0
1
0
1
1
0
['conversational']
false
true
true
4,285
false
# Aeona | Chatbot ![Aeona Banner](https://github.com/deepsarda/Aeona/blob/master/dashboard/static/banner.png?raw=true) A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small). Recommended for use along with an [AIML Chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot. Using an AIML chatbot also allows you to hardcode some replies. # AEONA Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend! Its main target platform is Discord. You can invite the bot [here](https://aeona.xyz). To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/). Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user. # Participate and Help the AI improve or just hang out at [hugging face discussions](https://huggingface.co/deepparag/Aeona/discussions) ## Goals The goal is to create an AI which will work with AIML in order to create the most human-like AI. #### Why not an AI on its own? For an AI on its own it is not (realistically) possible to learn about the user and store data on them, compared to an AIML chatbot which can even execute code! The goal of the AI is to generate responses where the AIML fails. Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible! So we use 3 datasets: 1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines! 2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages cover a wide variety of topics, filtered to remove spam, which makes the AI highly random but gives it a reply to everyday questions! About 120 million messages! 3. A custom dataset scraped from my messages. These messages are very narrow; teaching this dataset means sending a random reply will make the AI say sorry loads of times! ## Training The Discord Messages dataset simply dwarfs the other datasets, hence the datasets are repeated. This leads to them covering each other's issues! The AI has a context of 6 messages, which means it will reply taking into account up to the 4th message from the user. [Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1) ## Tips for Hugging Face inference I recommend sending the user input plus the previous 3 AI and human responses. Using more context than this will lead to useless responses; using less is alright, but the responses may be random. ## Evaluation Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics. | Model | Perplexity | |---|---| | Seq2seq Baseline [3] | 29.8 | | Wolf et al. [5] | 16.3 | | GPT-2 baseline | 99.5 | | DialoGPT baseline | 56.6 | | DialoGPT finetuned | 11.4 | | PersonaGPT | 10.2 | | **Aeona** | **7.9** | ## Usage Example: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona") model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=4, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
0e4bb1ddaf15d291d6c7fb78925c82e4
clboetticher/xlm-roberta-base-finetuned-panx-en
clboetticher
xlm-roberta
10
9
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4043 - F1: 0.6886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 | | 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 | | 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
5eb416ae7d641359ef6a2a0464708034
JorisCos/ConvTasNet_Libri3Mix_sepclean_8k
JorisCos
null
3
4
asteroid
0
audio-to-audio
true
false
false
cc-by-sa-4.0
null
['Libri3Mix', 'sep_clean']
null
0
0
0
0
0
0
0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
true
true
1,499
false
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k` Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the Libri3Mix dataset. Training config: ```yml data: n_src: 3 sample_rate: 8000 segment: 3 task: sep_clean train_dir: data/wav8k/min/train-360 valid_dir: data/wav8k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 24 early_stop: true epochs: 200 half_lr: true num_workers: 4 ``` Results: On the Libri3Mix min test set: ```yaml si_sdr: 8.581797049575108 si_sdr_imp: 11.977037288467368 sdr: 9.305885208641385 sdr_imp: 12.3943409734845 sir: 16.42030534048559 sir_imp: 19.508759460400984 sar: 10.641943911079238 sar_imp: -56.4345187842095 stoi: 0.8365148408724333 stoi_imp: 0.24401766199806396 ``` License notice: This work "ConvTasNet_Libri3Mix_sepclean_8k" is a derivative of the [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris.
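A minimal separation sketch (not part of the original card), assuming Asteroid is installed; the input here is random noise standing in for an 8kHz three-speaker mixture.

```python
import torch
from asteroid.models import ConvTasNet

# Load the pretrained separation model from the Hub
model = ConvTasNet.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepclean_8k")
model.eval()

# Stand-in for a 4-second, 8kHz mono mixture: shape (batch, time)
mixture = torch.randn(1, 8000 * 4)

with torch.no_grad():
    est_sources = model(mixture)  # (batch, n_src=3, time)

print(est_sources.shape)
```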
d5d9c22b27ae4f691c4e683cf9534ffd
malysheva42/spaeti_store_2
malysheva42
null
20
57
diffusers
0
text-to-image
true
false
false
creativeml-openrail-m
['en']
['malysheva42/spaeti_store']
null
1
0
1
0
0
0
0
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
true
true
1,139
false
# DreamBooth model for the spaeti concept trained by malysheva42 on the malysheva42/spaeti_store dataset. This is a Stable Diffusion model fine-tuned on the spaeti (späti) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of spaeti store** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `store` images for the wildcard theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('malysheva42/spaeti_store_2') image = pipeline().images[0] image ``` ## Examples 1. a picture of spaeti store in the forest ![a picture of spaeti store in the forest](sample-image-forest.png) 2. a picture of spaeti store on the beach near the sea, best quality ![a picture of spaeti store on the beach near the sea, best quality](sample-image-beach.png) 3. a picture of spaeti store in the snow ![a picture of spaeti store in the snow](sample-image-snow.png)
e8afe4a2fcb6313b17d7872a9c864dbe
wietsedv/xlm-roberta-base-ft-udpos28-hr
wietsedv
xlm-roberta
8
10
transformers
0
token-classification
true
false
false
apache-2.0
['hr']
['universal_dependencies']
null
0
0
0
0
0
0
0
['part-of-speech', 'token-classification']
true
true
true
568
false
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Croatian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr") ```
d4e1338f875a53e0f021124b415dd748
cjdentra/distilbert-base-uncased-finetuned-emotion
cjdentra
distilbert
14
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
933
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
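A minimal inference sketch (not part of the original card); the input sentence is an illustrative assumption.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cjdentra/distilbert-base-uncased-finetuned-emotion",
)

# Returns the predicted emotion label and its score
print(classifier("I can't wait to see my friends this weekend!"))
```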
575fe0f51e8bfb39c63fdc50a763816c
RUCAIBox/mvp-question-generation
RUCAIBox
mvp
9
452
transformers
1
text2text-generation
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['text-generation', 'text2text-generation']
false
true
true
3,762
false
# MVP-question-generation The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MVP-question-generation is a prompt-based model that MVP is further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts. MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?'] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). 
## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
64b652e1f0a7bb769c0cec33ec5ccc31
tensorspeech/tts-tacotron2-synpaflex-fr
tensorspeech
null
5
0
tensorflowtts
0
text-to-speech
false
false
false
apache-2.0
['fr']
['synpaflex']
null
0
0
0
0
0
0
0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
true
true
2,719
false
# Tacotron 2 with Guided Attention trained on Synpaflex (Fr) This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on Synpaflex dataset (Fr). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS). ## Install TensorFlowTTS First of all, please install TensorFlowTTS with the following command: ``` pip install TensorFlowTTS ``` ### Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr") tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr") text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis" input_ids = processor.text_to_sequence(text) decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), ) ``` #### Referencing Tacotron 2 ``` @article{DBLP:journals/corr/abs-1712-05884, author = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and R. J. Skerry{-}Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu}, title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions}, journal = {CoRR}, volume = {abs/1712.05884}, year = {2017}, url = {http://arxiv.org/abs/1712.05884}, archivePrefix = {arXiv}, eprint = {1712.05884}, timestamp = {Thu, 28 Nov 2019 08:59:52 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` #### Referencing TensorFlowTTS ``` @misc{TFTTS, author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata, Trinh Le and Yunchao He}, title = {TensorflowTTS}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}}, } ```
94530302014aebd82c1212ea8a1b1860
NOISK8/laywaxys
NOISK8
null
15
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
609
false
### laywaxys Dreambooth model trained by NOISK8 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
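A minimal generation sketch (not part of the original card), assuming a CUDA GPU; the prompt token `laywaxys` is inferred from the model name and may need adjusting to the exact instance prompt used during training.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "NOISK8/laywaxys", torch_dtype=torch.float16
).to("cuda")

# Assumed instance prompt; adjust to match the concept's training prompt
image = pipe("a photo of laywaxys").images[0]
image.save("laywaxys.png")
```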
4c58431cba117417f39d6c47550c66d5
RUCAIBox/mtl-question-generation
RUCAIBox
mvp
9
1
transformers
1
text2text-generation
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['text-generation', 'text2text-generation']
false
true
true
3,708
false
# MTL-question-generation The MTL-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP). ## Model Description MTL-question-generation is supervised pre-trained using a mixture of labeled question generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture. MTL-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA. ## Example ```python >>> from transformers import MvpTokenizer, MvpForConditionalGeneration >>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp") >>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-generation") >>> inputs = tokenizer( ... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .", ... return_tensors="pt", ... ) >>> generated_ids = model.generate(**inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['A bolo punch and a hook are both punches used in what sport?] ``` ## Related Models **MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp). **Prompt-based models**: - MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task). - MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization). - MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog). - MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text). - MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story). - MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering). - MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation). - MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog). **Multi-task models**: - MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization). - MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog). - MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text). - MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story). - MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering). - MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation). - MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog). 
## Citation ```bibtex @article{tang2022mvp, title={MVP: Multi-task Supervised Pre-training for Natural Language Generation}, author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong}, journal={arXiv preprint arXiv:2206.12131}, year={2022}, url={https://arxiv.org/abs/2206.12131}, } ```
1030bbec3fd3b5b8460f5ad1351bdc8a
ericRosello/bert-base-uncased-finetuned-squad-frozen-v2
ericRosello
bert
12
26
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,342
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4571 ## Model description Most base model weights were frozen, leaving only the last layer (qa outputs) and the last 3 layers of the encoder to be fine-tuned. ## Training and evaluation data Achieved EM: 76.77388836329234, F1: 85.41893520501723 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.2944 | 1.0 | 44262 | 1.3432 | | 1.0152 | 2.0 | 88524 | 1.3450 | | 1.0062 | 3.0 | 132786 | 1.4571 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
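A minimal extractive QA sketch (not part of the original card); the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ericRosello/bert-base-uncased-finetuned-squad-frozen-v2",
)

result = qa(
    question="Which layers were fine-tuned?",
    context="Most weights were frozen; only the qa outputs and the last three encoder layers were fine-tuned.",
)
print(result["answer"], result["score"])
```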
c770784a9475a3977dc2229e7963d3a5
sanchit-gandhi/whisper-small-hi-no-tensorboard
sanchit-gandhi
whisper
65
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['hi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,498
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4519 - Wer: 32.01 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | WER | |:-------------:|:-----:|:----:|:---------------:|:-----:| | 0.1011 | 2.44 | 1000 | 0.3075 | 34.63 | | 0.0264 | 4.89 | 2000 | 0.3558 | 33.13 | | 0.0025 | 7.33 | 3000 | 0.4214 | 32.59 | | 0.0006 | 9.78 | 4000 | 0.4519 | 32.01 | | 0.0002 | 12.22 | 5000 | 0.4679 | 32.10 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1 - Datasets 2.5.3.dev0 - Tokenizers 0.12.1
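A minimal transcription sketch (not part of the original card); the audio path is a hypothetical Hindi recording, and `chunk_length_s` is only needed for clips longer than 30 seconds.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-small-hi-no-tensorboard",
    chunk_length_s=30,
)

print(asr("hindi_sample.wav")["text"])
```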
39c29c4fce1c9fc9f7e27ade246f9b63
cm-mueller/BACnet-Klassifizierung-Gewerke-4.0
cm-mueller
bert
16
2
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,449
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BACnet-Klassifizierung-Gewerke-4.0 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0615 - F1: 0.9738 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0772 | 1.0 | 726 | 0.1155 | 0.9584 | | 0.0581 | 2.0 | 1452 | 0.0804 | 0.9518 | | 0.0616 | 3.0 | 2178 | 0.0756 | 0.9627 | | 0.0368 | 4.0 | 2904 | 0.0647 | 0.9738 | | 0.0223 | 5.0 | 3630 | 0.0615 | 0.9738 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.12.1
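A minimal inference sketch (not part of the original card); the German input string is an invented example of a building-automation datapoint description, so real BACnet object texts may look different.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cm-mueller/BACnet-Klassifizierung-Gewerke-4.0",
)

# Invented example of a German datapoint description
print(classifier("Zuluftventilator Drehzahl Istwert der Lüftungsanlage"))
```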
b39a08ccbe6b05930945f277f3f7074b
kejian/mighty-conditional
kejian
gpt2
39
10
transformers
0
null
true
false
false
apache-2.0
['en']
['kejian/codeparrot-train-more-filter-3.3b-cleaned']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
5,234
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mighty-conditional This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0008 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.23.0 - Pytorch 1.13.0+cu116 - Datasets 2.0.0 - Tokenizers 0.12.1 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.1, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0}, 'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True}, 'generation': {'batch_size': 128, 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'bad_words_ids': [[32769]], 'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_hits_threshold': 0, 'num_samples': 2048, 'prefix': '<|aligned|>', 'use_prompt_for_scoring': False}, {'display_as_html': True, 'generate_kwargs': {'bad_words_ids': [[32769]], 'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_hits_threshold': 0, 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>', 'should_insert_prefix': True}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'codeparrot/codeparrot-small'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'mighty-conditional', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0008, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/kejian/uncategorized/runs/zpigcpaa
d01bc549df32be8bd32f7a17ddec563a
Formzu/bart-base-japanese
Formzu
mbart
8
127
transformers
1
text2text-generation
true
false
false
mit
['ja']
['wikipedia']
null
0
0
0
0
0
0
0
['bart', 'pytorch']
false
true
true
2,527
false
# bart-base-japanese This model is converted from the original [Japanese BART Pretrained model](https://nlp.ist.i.kyoto-u.ac.jp/?BART%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB) released by Kyoto University. Both the encoder and decoder outputs are identical to the original Fairseq model. ### How to use the model The input text should be tokenized by [BartJapaneseTokenizer](https://huggingface.co/Formzu/bart-base-japanese/blob/main/tokenization_bart_japanese.py). Tokenizer requirements: * [Juman++](https://github.com/ku-nlp/jumanpp) * [zenhan](https://pypi.org/project/zenhan/) * [pyknp](https://pypi.org/project/pyknp/) * [sentencepiece](https://pypi.org/project/sentencepiece/) #### Simple FillMaskPipeline ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer) out = fill_mask(masked_text) print(out) # [{'score': 0.19255658984184265, 'token': 1718, 'token_str': 'よく', 'sequence': '天気 が よく から 散歩 し ましょう 。'}, # {'score': 0.14426815509796143, 'token': 5478, 'token_str': '良く', 'sequence': '天気 が 良く から 散歩 し ましょう 。'}, # {'score': 0.05554169788956642, 'token': 6561, 'token_str': '悪い', 'sequence': '天気 が 悪い から 散歩 し ましょう 。'}, # {'score': 0.05524599179625511, 'token': 3553, 'token_str': '良い', 'sequence': '天気 が 良い から 散歩 し ましょう 。'}, # {'score': 0.03720080852508545, 'token': 1370, 'token_str': '良', 'sequence': '天気 が 良 から 散歩 し ましょう 。'}] ``` #### Text Generation ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_name = "Formzu/bart-base-japanese" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) masked_text = "天気が<mask>から散歩しましょう。" inp = tokenizer(masked_text, return_tensors='pt').to(device) out = model.generate(**inp, num_beams=1, min_length=0, max_length=20, early_stopping=True, no_repeat_ngram_size=2) res = "".join(tokenizer.decode(out.squeeze(0).tolist(), skip_special_tokens=True).split(" ")) print(res) # 天気がよくなってから散歩しましょう。天気のよく合っているところにいる ``` ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Tokenizers 0.12.1
982a6806eff3f00d08fa20ac083b88ed
raghavsharma06/test_trainer
raghavsharma06
bert
10
11
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
886
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
d812ffbbe079779d882f82ddc13b6134
3ebdola/Dialectal-Arabic-XLM-R-Base
3ebdola
xlm-roberta
8
2
transformers
0
fill-mask
true
false
false
mit
['multilingual', 'af', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', False, 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'sa', 'sd', 'si', 'sk', 'sl', 'so', 'sq', 'sr', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'xh', 'yi', 'zh']
null
null
0
0
0
0
0
0
0
['Dialectal Arabic', 'Arabic', 'sequence labeling', 'Named entity recognition', 'Part-of-speech tagging', 'Zero-shot transfer learning', 'bert']
false
true
true
2,088
false
# Dialectal Arabic XLM-R Base This is a repo of the language model used for "AdaSL: An Unsupervised Domain Adaptation framework for Arabic multi-dialectal Sequence Labeling", the state-of-the-art method for sequence labeling on multi-dialectal Arabic. ### About the Dialectal-Arabic-XLM-R-Base model This model is a further pre-trained version of XLM-RoBERTa base, trained with masked language modeling on a dialectal Arabic corpus. ### About the Dialectal-Arabic-XLM-R-Base model training corpora We have built a 5 million tweet corpus from Twitter. The crawled tweets cover the dialects of the four Arabic world regions (EGY, GLF, LEV, and MAG regions), as well as MSA. The collected corpus consists of one million (1M) tweets per Arabic variant. We did not perform any text pre-processing on the tweets, except for removing very short tweets (those containing fewer than four words). ### Usage The model weights can be loaded using the `transformers` library by Hugging Face. ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("3ebdola/Dialectal-Arabic-XLM-R-Base") model = AutoModel.from_pretrained("3ebdola/Dialectal-Arabic-XLM-R-Base") text = "هذا مثال لنص باللغة العربية, يمكنك استعمال اللهجات العربية أيضا" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Citation ``` @article{ELMEKKI2022102964, title = {AdaSL: An Unsupervised Domain Adaptation framework for Arabic multi-dialectal Sequence Labeling}, journal = {Information Processing & Management}, volume = {59}, number = {4}, pages = {102964}, year = {2022}, issn = {0306-4573}, doi = {https://doi.org/10.1016/j.ipm.2022.102964}, url = {https://www.sciencedirect.com/science/article/pii/S0306457322000814}, author = {Abdellah {El Mekki} and Abdelkader {El Mahdaouy} and Ismail Berrada and Ahmed Khoumsi}, keywords = {Dialectal Arabic, Arabic natural language processing, Domain adaptation, Multi-dialectal sequence labeling, Named entity recognition, Part-of-speech tagging, Zero-shot transfer learning} } ```
2244766a2d959b11b06901054d8c480e
lfcc/bert-portuguese-squad2
lfcc
bert
9
1
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
831
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-portuguese-squad2 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the SQuAD_v2 dataset, translated to Portuguese. ## Model description More information needed ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.6.1 - Tokenizers 0.13.2
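A minimal extractive QA sketch in Portuguese (not part of the original card); the question/context pair is illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="lfcc/bert-portuguese-squad2")

result = qa(
    question="Onde fica a Torre dos Clérigos?",
    context="A Torre dos Clérigos é uma torre sineira do século XVIII situada na cidade do Porto, em Portugal.",
)
print(result["answer"])
```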
bb19bed2dcc7df1cd0e4717ddc284684
lmqg/bart-large-squadshifts-amazon-qg
lmqg
bart
15
7
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qg_squadshifts']
null
0
0
0
0
0
0
0
['question generation']
true
true
true
4,074
false
# Model Card of `lmqg/bart-large-squadshifts-amazon-qg` This model is fine-tuned version of [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) for question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: amazon) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (amazon) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-large-squadshifts-amazon-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-squadshifts-amazon-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squadshifts-amazon-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 92.49 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 28.9 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 19.57 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 13.66 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 9.8 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 23.79 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 63.31 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 28.69 | amazon | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: amazon - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: lmqg/bart-large-squad - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 32 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squadshifts-amazon-qg/raw/main/trainer_config.json). 
## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
278c058bc8673b214e6beda7fc3249be
chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab
chaitanya97
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,638
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 7.2810 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 23.4144 | 0.8 | 4 | 29.5895 | 1.0 | | 19.1336 | 1.6 | 8 | 18.3354 | 1.0 | | 12.1562 | 2.4 | 12 | 11.2065 | 1.0 | | 8.1523 | 3.2 | 16 | 8.8674 | 1.0 | | 6.807 | 4.0 | 20 | 7.8106 | 1.0 | | 6.1583 | 4.8 | 24 | 7.2810 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
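The card above only documents training. The snippet below is not part of the original card; it is a minimal inference sketch showing how a checkpoint like this is typically loaded with the `transformers` ASR pipeline. The audio path is a placeholder for a local 16 kHz mono recording, and given the reported WER of 1.0 the transcriptions should not be expected to be usable.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab",
)

# "sample.wav" is a placeholder path; the pipeline decodes and resamples the file as needed.
print(asr("sample.wav")["text"])
```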
d35d6da8f58836f7b120ea7ec24058a5
willcai/wav2vec2-large-xls-r-300m-tr-colab
willcai
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,234
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tr-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4121 - Wer: 0.3112 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.1868 | 1.83 | 400 | 0.9812 | 0.8398 | | 0.691 | 3.67 | 800 | 0.5571 | 0.6298 | | 0.3555 | 5.5 | 1200 | 0.4676 | 0.4779 | | 0.2451 | 7.34 | 1600 | 0.4572 | 0.4541 | | 0.1844 | 9.17 | 2000 | 0.4743 | 0.4389 | | 0.1541 | 11.01 | 2400 | 0.4583 | 0.4300 | | 0.1277 | 12.84 | 2800 | 0.4565 | 0.3950 | | 0.1122 | 14.68 | 3200 | 0.4761 | 0.4087 | | 0.0975 | 16.51 | 3600 | 0.4654 | 0.3786 | | 0.0861 | 18.35 | 4000 | 0.4503 | 0.3667 | | 0.0775 | 20.18 | 4400 | 0.4600 | 0.3581 | | 0.0666 | 22.02 | 4800 | 0.4350 | 0.3504 | | 0.0627 | 23.85 | 5200 | 0.4211 | 0.3349 | | 0.0558 | 25.69 | 5600 | 0.4390 | 0.3333 | | 0.0459 | 27.52 | 6000 | 0.4218 | 0.3185 | | 0.0439 | 29.36 | 6400 | 0.4121 | 0.3112 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
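No usage example is provided in the card above. The following is a hedged sketch (not from the original card) of manual CTC decoding with `Wav2Vec2Processor` and `Wav2Vec2ForCTC`, assuming the processor/tokenizer files were saved alongside the model as the training script normally does. The audio path is a placeholder for a 16 kHz mono recording.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "willcai/wav2vec2-large-xls-r-300m-tr-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files exist in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "speech.wav" is a placeholder; XLS-R models expect 16 kHz mono input.
speech, _ = librosa.load("speech.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame, then collapse repeats/blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```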
a8cac367e7bd4dd88f9ba6d8915ea233
alkzar90/sd-class-ukiyo-e-256
alkzar90
null
10
4
diffusers
1
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
true
true
1,250
false
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)

This model is a diffusion model for unconditional image generation of Ukiyo-e images ✍ 🎨. The model was trained by fine-tuning the google/ddpm-celebahq-256 pretrained model on the dataset: https://huggingface.co/datasets/huggan/ukiyoe2photo

![](https://huggingface.co/alkzar90/sd-class-ukiyo-e-256/resolve/main/ukyo-e-portrait.jpeg)

* Google Colab notebook for experimenting with the model and the sampling process using a Gradio App: https://colab.research.google.com/drive/1F7SH4T9y5fJKxj5lU9HqTzadv836Zj_G?usp=sharing
* Weights & Biases dashboard with training information: https://wandb.ai/alcazar90/fine-tuning-a-diffusion-model?workspace=user-alcazar90

## Usage

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('alkzar90/sd-class-ukiyo-e-256')
image = pipeline().images[0]
image
```

## Guidance

**Prompt:** _A sakura tree_

![](https://huggingface.co/alkzar90/sd-class-ukiyo-e-256/resolve/main/ukyo-e-sakura-tree.jpeg)

**Prompt:** _An island with sunset at background_

![](https://huggingface.co/alkzar90/sd-class-ukiyo-e-256/resolve/main/ukyo-e-sunset-island.jpeg)
f9eb12bef47e485fadf06dcce5a3c31e
sledz08/finetuned-bert-piqa
sledz08
bert
12
61
transformers
0
multiple-choice
true
false
false
apache-2.0
null
['piqa']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,380
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-bert-piqa This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the piqa dataset. It achieves the following results on the evaluation set: - Loss: 0.6603 - Accuracy: 0.6518 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 251 | 0.6751 | 0.6115 | | 0.6628 | 2.0 | 502 | 0.6556 | 0.6534 | | 0.6628 | 3.0 | 753 | 0.6603 | 0.6518 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
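No usage example is given above. As a hedged illustration (not part of the original card), a multiple-choice checkpoint like this one is normally queried by scoring each candidate solution against the goal, in the spirit of PIQA. The goal and solution strings below are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "sledz08/finetuned-bert-piqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Hypothetical PIQA-style example: one goal, two candidate solutions.
goal = "To make a makeshift funnel,"
solutions = ["roll a piece of paper into a cone.", "fold a piece of paper in half."]

# Pair the goal with every candidate, then add the batch dimension the model expects:
# input shape is (batch_size, num_choices, sequence_length).
enc = tokenizer([goal] * len(solutions), solutions, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print(solutions[logits.argmax(dim=-1).item()])
```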
6015ef2853476e8a09128cc330933890
ckiplab/albert-base-chinese-ner
ckiplab
albert
7
702
transformers
4
token-classification
true
false
false
gpl-3.0
['zh']
null
null
0
0
0
0
0
0
0
['pytorch', 'token-classification', 'albert', 'zh']
false
true
true
964
false
# CKIP ALBERT Base Chinese

This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).

這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。

## Homepage
- https://github.com/ckiplab/ckip-transformers

## Contributors
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)

## Usage

Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer.

請使用 BertTokenizerFast 而非 AutoTokenizer。

```
from transformers import (
   BertTokenizerFast,
   AutoModel,
)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ner')
```

For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.

有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
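The block above loads only the raw encoder via `AutoModel`. To actually obtain entity labels, a sketch along the following lines can be used. This is not from the upstream documentation: the example sentence is a placeholder, and `aggregation_strategy='simple'` may group this model's label scheme imperfectly.

```
from transformers import (
   BertTokenizerFast,
   AutoModelForTokenClassification,
   pipeline,
)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/albert-base-chinese-ner')

# Token-classification pipeline that returns named entities for a sentence.
ner = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy='simple')
print(ner('中央研究院位於台北市南港區。'))
```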
c3622244de623f90e4c578bdca31c744
DucHaiten/DucHaitenAIart
DucHaiten
null
28
6,326
diffusers
74
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
7
2
5
0
7
6
1
['stable-diffusion', 'text-to-image', 'image-to-image', 'diffusers']
false
true
true
9,526
false
**Big update DucHaitenAIart_v2.0**

*DucHaitenAIart_v2.0 is an extended version of v1.2. It improves everything over v1.2, can do things v1.2 cannot, and is more diverse and more detailed.*

It looks like people haven't used the full power of the model yet, so I'll post a few more example photos of what my model can do. It's all just text to image, no editing.

**Please support me by becoming a patron:** patreon.com/duchaitenreal

***** All sample images use text to image only: no editing, no image to image, no restore faces, no highres fix, no extras.

***** Hello, sorry for my lousy English. After days of trying and retrying hundreds of times, with dozens of different versions, DucHaitenAIart has finally released its official version. It improves image sharpness, adds more realistic lighting correction and more shooting angles; the only downside is that it's less flexible and less random than beta-v6.0, so I will still leave beta-v6.0 available for anyone to download.

This model can create NSFW images, but since it is not a hentai or porn model, anything really hardcore will be difficult to create. To make the model work better with NSFW images, add “hentai, porn, rule 34” to the prompt.

Always add to the prompt: “masterpiece, best quality, 1girl or 1boy, realistic, anime or cartoon (two different styles; I personally prefer anime), 3D, pixar” (add “pin-up” if you are going to give your character a sexy pose), “highly detail eyes, perfect eyes, both eyes are the same” (if you don't want to draw eyes, don't add them), “smooth, perfect face, hd, 2k, 4k, 8k, 16k”.

Add to the prompt “extremely detailed 8K, high resolution, ultra quality” to further enhance the image quality, but it may weaken the AI's interest in other keywords.

You can add “glare, Iridescent, Global illumination, real hair movement, realistic light, realistic shadow” to the prompt to create a better lighting effect, but the image may then become too realistic; if that is not what you want, adjust the prompt accordingly.
***** Sampler: Euler a

You can also create 2D anime images in the following way:

+ Prompt: masterpiece, best quality, 1girl, (anime), (manga), (2D), half body, perfect eyes, both eyes are the same, Global illumination, soft light, dream light, digital painting, extremely detailed CGI anime, hd, 2k, 4k background (the fewer quality-boosting keywords, the easier it is to create 2D anime images, so you can remove some of these keywords from the prompt; adding more of them can turn 2D images into 2.5D)

+ Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, Lots of hands, not perfect, extra limbs, extra fingers, missing fingers, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, 3d, 3d, 3d, 3d, realistic, realistic, realistic, realistic, realistic (removing or adding the “3d, realistic” keywords in the negative prompt can change the image style)

***** Negative prompt I used for the sample images: lowres, disfigured, ostentatious, ugly, oversaturated, grain, low resolution, disfigured, blurry, bad anatomy, disfigured, poorly drawn face, mutant, mutated, extra limb, ugly, poorly drawn hands, missing limbs, blurred, floating limbs, disjointed limbs, deformed hands, blurred, out of focus, long neck, long body, ugly, disgusting, bad drawing, childish, cut off cropped, distorted, imperfect, surreal, bad hands, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, Lots of hands, extra limbs, extra fingers, conjoined fingers, deformed fingers, old, ugly eyes, imperfect eyes, skewed eyes, unnatural face, stiff face, stiff body, unbalanced body, unnatural body, lacking body, details are not clear, details are sticky, details are low, distorted details, ugly hands, imperfect hands, (mutated hands and fingers:1.5), (long body :1.3), (mutation, poorly drawn :1.2), bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, ui, missing fingers

***** Note 1: “realistic, 3D, anime, pixar” is required in the prompt to create beautiful images, unless you want to explore something new.

Note 2: the negative prompt is extremely important; it accounts for 50% of the AI's output, so really pay attention to it, or, if you are too lazy, you can just reuse my negative prompt for most portrait images.

Note 3: none of the instructions I wrote above are absolute, and they are certainly not the best; if you only follow what I wrote, you will not be able to fully explore the capabilities of the DucHaitenAIart model. Explore, be creative, try a variety of styles, and share the results. That's really what I want.
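Nothing in the next block is from the original card. It is a minimal `diffusers` sketch of how the prompt and negative-prompt guidance above is typically applied, assuming the diffusers weights in this repo and a CUDA GPU; the prompts are shortened versions of the suggestions above, and "Euler a" is mapped to `EulerAncestralDiscreteScheduler`.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "DucHaiten/DucHaitenAIart",
    torch_dtype=torch.float16,
)
# "Euler a" in most UIs corresponds to the Euler ancestral scheduler in diffusers.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Shortened versions of the prompt/negative-prompt guidance above, for illustration only.
prompt = "masterpiece, best quality, 1girl, anime, 3D, pixar, perfect eyes, both eyes are the same, smooth, perfect face, hd, 4k"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, worst quality, low quality, jpeg artifacts, watermark, blurry"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("duchaiten_sample.png")
```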
Some test: ![60C41D4F-A43F-4F54-A501-A01DD7EB5516.png](https://s3.amazonaws.com/moonup/production/uploads/1675760061284-630b58b279d18d5e53e3a5a9.png) ![B05192EE-AA9C-42AF-9969-9BFBFC6EA91E.png](https://s3.amazonaws.com/moonup/production/uploads/1675760063208-630b58b279d18d5e53e3a5a9.png) ![A5A6B464-5AA0-4FB3-8D9A-3D3C86F45046.png](https://s3.amazonaws.com/moonup/production/uploads/1675760067446-630b58b279d18d5e53e3a5a9.png) ![5B625E7F-F7BD-4CA9-AAFC-C808DF7DEA77.png](https://s3.amazonaws.com/moonup/production/uploads/1675760068791-630b58b279d18d5e53e3a5a9.png) ![60D9AF5E-90F5-4C5D-B8C8-E1894664CF7C.png](https://s3.amazonaws.com/moonup/production/uploads/1675760059510-630b58b279d18d5e53e3a5a9.png) ![D1E72E7D-7C19-4291-A70F-6E052BCEB652.png](https://s3.amazonaws.com/moonup/production/uploads/1675760061376-630b58b279d18d5e53e3a5a9.png) ![38BB9AF3-FE8D-4E9C-9B9E-95D6F2DCB4F5.png](https://s3.amazonaws.com/moonup/production/uploads/1675760065581-630b58b279d18d5e53e3a5a9.png) ![70BD8F21-E642-40FD-9E8F-A1C94475C495.png](https://s3.amazonaws.com/moonup/production/uploads/1675760067708-630b58b279d18d5e53e3a5a9.png) ![6F562A23-BE19-4829-8C22-E453B6D37769.png](https://s3.amazonaws.com/moonup/production/uploads/1675760063776-630b58b279d18d5e53e3a5a9.png) ![27CFF8A2-4C82-4E6F-9F9F-33C41AFC9EFA.png](https://s3.amazonaws.com/moonup/production/uploads/1675760068779-630b58b279d18d5e53e3a5a9.png) ![8680014D-202F-48D2-8861-51221C4C9DED.png](https://s3.amazonaws.com/moonup/production/uploads/1675760067045-630b58b279d18d5e53e3a5a9.png) ![85EA67C4-DF3B-4B2A-BA29-76B5A72AA23F.png](https://s3.amazonaws.com/moonup/production/uploads/1675760060031-630b58b279d18d5e53e3a5a9.png) ![06931620-F149-4201-9E10-6D34627DADF5.png](https://s3.amazonaws.com/moonup/production/uploads/1675760054742-630b58b279d18d5e53e3a5a9.png) ![0FA4F487-C337-44BE-AA8A-05A5CA7BA085.png](https://s3.amazonaws.com/moonup/production/uploads/1675760060033-630b58b279d18d5e53e3a5a9.png) ![409325E8-6B7C-4787-9239-658D711E9E30.png](https://s3.amazonaws.com/moonup/production/uploads/1675760060035-630b58b279d18d5e53e3a5a9.png) ![CAFCE0F7-3988-4FFC-84CA-DCD04A9014D1.png](https://s3.amazonaws.com/moonup/production/uploads/1675760061895-630b58b279d18d5e53e3a5a9.png) ![0A4E37F5-52DA-47B6-88D8-730B7F8DF8E5.png](https://s3.amazonaws.com/moonup/production/uploads/1675760065850-630b58b279d18d5e53e3a5a9.png) ![1F45EE77-5143-4190-A7FE-58DD89E19B88.png](https://s3.amazonaws.com/moonup/production/uploads/1675760057954-630b58b279d18d5e53e3a5a9.png) ![72DCA31F-2464-4956-B2E8-0843536C5DCA.png](https://s3.amazonaws.com/moonup/production/uploads/1675760064250-630b58b279d18d5e53e3a5a9.png) ![DAF4C0A9-C332-4885-87ED-8D700255422A.png](https://s3.amazonaws.com/moonup/production/uploads/1675760065896-630b58b279d18d5e53e3a5a9.png) ![1404354532-1213325309-Intricately detailed Full body, professional photograph, of (seductive royal vampire female), clothed, sitting, on chair, in lux - Copy.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758280-630b58b279d18d5e53e3a5a9.png) ![1404354535-2464576535-a beautiful portrait of a cute cyberpunk dog by sandra chevrier and greg rutkowski and wlop, purple blue color scheme, high key.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758289-630b58b279d18d5e53e3a5a9.png) ![1404354538-4099311588-beautiful portrait of cute jackalope in the middle of magical forrest at night, magic lights, sparkles, felt, felted, fuzzy, 
han.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758261-630b58b279d18d5e53e3a5a9.png) ![1404354539-903244661-masterpiece, best quality, best quality,Amazing,1girl,finely detail,Depth of field,extremely detailed CG unity 8k wallpaper, mas.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758273-630b58b279d18d5e53e3a5a9.png) ![1404354540-358593114-masterpiece, best quality, 1girl, super high level detail of cute lolita girl portrait, young, final fantasy, beautiful, goth, d.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758277-630b58b279d18d5e53e3a5a9.png) ![1404354543-3331273532-masterpiece, best quality, (full body), portrait, night city, 1girl, anime, 3D, Japan, pixar, realistic, teen girl, smiling, cut.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758279-630b58b279d18d5e53e3a5a9.png) ![1404354544-3331273533-masterpiece, best quality, (full body), portrait, night city, 1girl, anime, 3D, Japan, pixar, realistic, teen girl, smiling, cut.png](https://s3.amazonaws.com/moonup/production/uploads/1674645758268-630b58b279d18d5e53e3a5a9.png)
06abaf2b639c77f28038565e930edfe0