Dataset schema (one row per model repository; dtype and observed length range, value range, or number of classes per column):

| column | dtype | range / classes |
|---|---|---|
| repo_id | string | length 4-110 |
| author | string | length 2-27 |
| model_type | string | length 2-29 |
| files_per_repo | int64 | 2-15.4k |
| downloads_30d | int64 | 0-19.9M |
| library | string | length 2-37 |
| likes | int64 | 0-4.34k |
| pipeline | string | length 5-30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2-30 |
| languages | string | length 4-1.63k |
| datasets | string | length 2-2.58k |
| co2 | string (categorical) | 29 values |
| prs_count | int64 | 0-125 |
| prs_open | int64 | 0-120 |
| prs_merged | int64 | 0-15 |
| prs_closed | int64 | 0-28 |
| discussions_count | int64 | 0-218 |
| discussions_open | int64 | 0-148 |
| discussions_closed | int64 | 0-70 |
| tags | string | length 2-513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401-598k |
| is_nc | bool | 1 class |
| readme | string | length 0-598k |
| hash | string | length 32 (fixed) |
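The schema above describes a tabular dump of model-repository metadata plus each repo's README. As a minimal sketch of how such a dump could be loaded and inspected with the `datasets` library (the dataset identifier below is a placeholder, not the actual location of this dump):

```python
from datasets import load_dataset

# "your-org/model-card-dump" is a hypothetical id; point it at wherever the dump is hosted.
ds = load_dataset("your-org/model-card-dump", split="train")

print(ds.column_names)          # repo_id, author, model_type, ..., readme, hash
print(ds[0]["repo_id"])         # the first record's repository id
print(ds[0]["downloads_30d"])   # its 30-day download count

# Example filter: PyTorch repos with at least one like.
popular_pt = ds.filter(lambda r: r["pytorch"] and r["likes"] > 0)
print(len(popular_pt))
```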
fathyshalab/domain_transfer_general-massive_alarm-roberta-large-v1-5-50
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,508
false
# fathyshalab/domain_transfer_general-massive_alarm-roberta-large-v1-5-50 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_alarm-roberta-large-v1-5-50") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
979efe493b173b85730eba6d38cafe51
jayanta/convnext-large-224-22k-1k-FV2-finetuned-memes
jayanta
convnext
12
3
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,340
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-large-224-22k-1k-FV2-finetuned-memes This model is a fine-tuned version of [facebook/convnext-large-224-22k-1k](https://huggingface.co/facebook/convnext-large-224-22k-1k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4290 - Accuracy: 0.8663 - Precision: 0.8617 - Recall: 0.8663 - F1: 0.8629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8992 | 0.99 | 20 | 0.6455 | 0.7658 | 0.7512 | 0.7658 | 0.7534 | | 0.4245 | 1.99 | 40 | 0.4008 | 0.8539 | 0.8680 | 0.8539 | 0.8541 | | 0.2054 | 2.99 | 60 | 0.3245 | 0.8694 | 0.8631 | 0.8694 | 0.8650 | | 0.1102 | 3.99 | 80 | 0.3231 | 0.8671 | 0.8624 | 0.8671 | 0.8645 | | 0.0765 | 4.99 | 100 | 0.3882 | 0.8563 | 0.8603 | 0.8563 | 0.8556 | | 0.0642 | 5.99 | 120 | 0.4133 | 0.8601 | 0.8604 | 0.8601 | 0.8598 | | 0.0574 | 6.99 | 140 | 0.3889 | 0.8694 | 0.8657 | 0.8694 | 0.8667 | | 0.0526 | 7.99 | 160 | 0.4145 | 0.8655 | 0.8705 | 0.8655 | 0.8670 | | 0.0468 | 8.99 | 180 | 0.4256 | 0.8679 | 0.8642 | 0.8679 | 0.8650 | | 0.0472 | 9.99 | 200 | 0.4290 | 0.8663 | 0.8617 | 0.8663 | 0.8629 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.6.1.dev0 - Tokenizers 0.13.1
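Since the card above leaves usage at "More information needed", here is a minimal, hedged inference sketch using the generic `transformers` image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint named in the card above.
classifier = pipeline(
    "image-classification",
    model="jayanta/convnext-large-224-22k-1k-FV2-finetuned-memes",
)

# "meme.jpg" is a placeholder path to a local image file.
preds = classifier("meme.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```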
5320ea73632baf0e1bab3ced8c59b749
espnet/kan-bayashi_vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
espnet
null
19
0
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['en']
['vctk']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
1,858
false
## Example ESPnet2 TTS model ### `kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4037456/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
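The demo block in the card above is still marked "coming soon". The following is only a sketch of typical ESPnet2 TTS inference, not the authors' official demo; it assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and GST models may additionally require a reference utterance for the style embedding:

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Sketch only: load the imported model by its Hub tag.
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave"
)

# GST-based models may need a reference waveform via the `speech=` argument.
speech = tts("Hello, this is a test of the VCTK GST-Transformer model.")["wav"]
sf.write("out.wav", speech.numpy(), tts.fs)  # tts.fs is the model's sampling rate
```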
3c9a41add58ecd35a5115fe70be67a22
apoorvumang/kgt5-base-wikikg90mv2
apoorvumang
t5
8
18
transformers
1
text2text-generation
true
true
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
5,866
false
This is a t5-base model (init from pretrained weights) and finetuned on WikiKG90Mv2 dataset. Please see https://github.com/apoorvumang/kgt5/ for more details on the method. This model was trained on the tail entity prediction task ie. given subject entity and relation, predict the object entity. Input should be provided in the form of "\<entity text\>| \<relation text\>". We used the raw text title and descriptions to get entity and relation textual representations. These raw texts were obtained from ogb dataset itself (dataset/wikikg90m-v2/mapping/entity.csv and relation.csv). Entity representation was set to the title, and description was used to disambiguate if 2 entities had the same title. If still no disambiguation was possible, we used the wikidata ID (eg. Q123456). We trained the model on WikiKG90Mv2 for approx 1.5 epochs on 4x1080Ti GPUs. The training time for 1 epoch was approx 5.5 days. To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions which do not map back to a valid entity, and then rank the predictions by their log probabilities. Filtering was performed subsequently. **We achieve 0.239 validation MRR** (the full leaderboard is here https://ogb.stanford.edu/docs/lsc/leaderboards/#wikikg90mv2) You can try the following code in an ipython notebook to evaluate the pre-trained model. The full procedure of mapping entity to ids, filtering etc. is not included here for sake of simplicity but can be provided on request if needed. Please contact Apoorv (apoorvumang@gmail.com) for clarifications/details. --------- ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2") model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2") ``` ``` import torch def getScores(ids, scores, pad_token_id): """get sequence scores from model.generate output""" scores = torch.stack(scores, dim=1) log_probs = torch.log_softmax(scores, dim=2) # remove start token ids = ids[:,1:] # gather needed probs x = ids.unsqueeze(-1).expand(log_probs.shape) needed_logits = torch.gather(log_probs, 2, x) final_logits = needed_logits[:, :, 0] padded_mask = (ids == pad_token_id) final_logits[padded_mask] = 0 final_scores = final_logits.sum(dim=-1) return final_scores.cpu().detach().numpy() def topkSample(input, model, tokenizer, num_samples=5, num_beams=1, max_output_length=30): tokenized = tokenizer(input, return_tensors="pt") out = model.generate(**tokenized, do_sample=True, num_return_sequences = num_samples, num_beams = num_beams, eos_token_id = tokenizer.eos_token_id, pad_token_id = tokenizer.pad_token_id, output_scores = True, return_dict_in_generate=True, max_length=max_output_length,) out_tokens = out.sequences out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True) out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id) pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)] sorted_pair_list = sorted(pair_list, key=lambda x:x[1], reverse=True) return sorted_pair_list def greedyPredict(input, model, tokenizer): input_ids = tokenizer([input], return_tensors="pt").input_ids out_tokens = model.generate(input_ids) out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True) return out_str[0] ``` ``` # an example from validation set that the model predicts correctly # you can try your own examples here. what's your noble title? 
input = "Sophie Valdemarsdottir| noble title" out = topkSample(input, model, tokenizer, num_samples=5) out ``` You can further load the list of entity aliases, then filter only those predictions which are valid entities then create a reverse mapping from alias -> integer id to get final predictions in required format. However, loading these aliases in memory as a dictionary requires a lot of RAM + you need to download the aliases file (made available here https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle) (relation file: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle) The submitted validation/test results for were obtained by sampling 300 times for each input, then applying above procedure, followed by filtering known entities. The final MRR can vary slightly due to this sampling nature (we found that although beam search gives deterministic output, the results are inferior to sampling large number of times). ``` # download valid.txt. you can also try same url with test.txt. however test does not contain the correct tails !wget https://storage.googleapis.com/kgt5-wikikg90mv2/valid.txt ``` ``` fname = 'valid.txt' valid_lines = [] f = open(fname) for line in f: valid_lines.append(line.rstrip()) f.close() print(valid_lines[0]) ``` ``` from tqdm.auto import tqdm # try unfiltered hits@k. this is approximation since model can sample same seq multiple times # you should run this on gpu if you want to evaluate on all points with 300 samples each k = 1 count_at_k = 0 max_predictions = k max_points = 1000 for line in tqdm(valid_lines[:max_points]): input, target = line.split('\t') model_output = topkSample(input, model, tokenizer, num_samples=max_predictions) prediction_strings = [x[0] for x in model_output] if target in prediction_strings: count_at_k += 1 print('Hits at {0} unfiltered: {1}'.format(k, count_at_k/max_points)) ```
49ff2094dc633e56ffb0460abd628b68
willcai/wav2vec2_common_voice_accents_indian
willcai
wav2vec2
11
6
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,492
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_common_voice_accents_indian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 48 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 384 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.5186 | 1.28 | 400 | 0.6937 | | 0.3485 | 2.56 | 800 | 0.2323 | | 0.2229 | 3.83 | 1200 | 0.2195 | | 0.1877 | 5.11 | 1600 | 0.2147 | | 0.1618 | 6.39 | 2000 | 0.2058 | | 0.1434 | 7.67 | 2400 | 0.2077 | | 0.132 | 8.95 | 2800 | 0.1995 | | 0.1223 | 10.22 | 3200 | 0.2146 | | 0.1153 | 11.5 | 3600 | 0.2117 | | 0.1061 | 12.78 | 4000 | 0.2071 | | 0.1003 | 14.06 | 4400 | 0.2219 | | 0.0949 | 15.34 | 4800 | 0.2204 | | 0.0889 | 16.61 | 5200 | 0.2162 | | 0.0824 | 17.89 | 5600 | 0.2243 | | 0.0784 | 19.17 | 6000 | 0.2323 | | 0.0702 | 20.45 | 6400 | 0.2325 | | 0.0665 | 21.73 | 6800 | 0.2334 | | 0.0626 | 23.0 | 7200 | 0.2411 | | 0.058 | 24.28 | 7600 | 0.2473 | | 0.054 | 25.56 | 8000 | 0.2591 | | 0.0506 | 26.84 | 8400 | 0.2577 | | 0.0484 | 28.12 | 8800 | 0.2633 | | 0.0453 | 29.39 | 9200 | 0.2692 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4 - Tokenizers 0.11.6
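The card above lists training details only; a minimal, hedged transcription sketch with the `transformers` ASR pipeline follows (the audio path is a placeholder and is assumed to be a 16 kHz mono recording):

```python
from transformers import pipeline

# Sketch only: the checkpoint is the fine-tuned model from the card above.
asr = pipeline(
    "automatic-speech-recognition",
    model="willcai/wav2vec2_common_voice_accents_indian",
)

# "sample.wav" is a placeholder for a local audio file.
result = asr("sample.wav")
print(result["text"])
```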
ed4284e256d8a044b1ef7e9156a5fb7c
qisan/whisper-small-hi
qisan
whisper
15
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sv']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,271
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_tuned_whisper_cn This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5297 - eval_wer: 80.2457 - eval_runtime: 457.7207 - eval_samples_per_second: 2.311 - eval_steps_per_second: 0.291 - epoch: 2.02 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
f2adf3bc9fe75a5b39103e2f3ba1e6e6
Palak/distilroberta-base_squad
Palak
roberta
14
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,024
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base_squad This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the **squadV1** dataset. - "eval_exact_match": 80.97445600756859 - "eval_f1": 88.0153886332912 - "eval_samples": 10790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.14.1 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.10.3
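A minimal, hedged usage sketch for the card above with the `transformers` question-answering pipeline; the question and context strings are placeholders:

```python
from transformers import pipeline

# Sketch only: load the SQuAD-finetuned checkpoint from the card above.
qa = pipeline("question-answering", model="Palak/distilroberta-base_squad")

result = qa(
    question="What dataset was the model fine-tuned on?",            # placeholder question
    context="The model was fine-tuned on the SQuAD v1 dataset.",     # placeholder context
)
print(result["answer"], result["score"])
```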
1c21a99a01e720f8fcecaae360f051eb
edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean
edgertej
bert
7
5
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,259
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # edgertej/poebert-clean-checkpoint-finetuned-poetry-foundation-clean This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8658 - Validation Loss: 3.6186 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.0379 | 3.6686 | 0 | | 3.9346 | 3.6478 | 1 | | 3.8658 | 3.6186 | 2 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.9.1 - Datasets 2.4.0 - Tokenizers 0.12.1
5fe1b58d2eda7d79c4c2301490065052
l3cube-pune/marathi-albert-v2
l3cube-pune
albert
8
7
transformers
1
fill-mask
true
false
false
cc-by-4.0
['mr']
['L3Cube-MahaCorpus']
null
0
0
0
0
0
0
0
[]
false
true
true
820
false
## MahaAlBERT MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. [Dataset link](https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159). ``` @InProceedings{joshi:2022:WILDRE6, author = {Joshi, Raviraj}, title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources}, booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {97--101} } ```
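The card above gives no usage snippet; a minimal, hedged masked-LM sketch follows. The Marathi example sentence is a placeholder, and the mask token is taken from the tokenizer rather than hard-coded:

```python
from transformers import pipeline

# Sketch only: load the Marathi ALBERT checkpoint from the card above.
fill = pipeline("fill-mask", model="l3cube-pune/marathi-albert-v2")

# Placeholder Marathi sentence with the model's mask token inserted.
masked = f"मी शाळेत {fill.tokenizer.mask_token} आहे."
for pred in fill(masked):
    print(pred["token_str"], round(pred["score"], 3))
```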
8a929dd9e76d8b646721dc99712d9082
OthmaneJ/distil-wav2vec2
OthmaneJ
wav2vec2
8
176
transformers
10
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['librispeech_asr']
null
0
0
0
0
0
0
0
['speech', 'audio', 'automatic-speech-recognition']
false
true
true
703
false
# Distil-wav2vec2 This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 45% smaller and twice as fast as the original wav2vec2 base model. # Evaluation results This model achieves the following results (speed is measured for a batch size of 64): |Model| Size| WER Librispeech-test-clean |WER Librispeech-test-other|Speed on CPU|Speed on GPU| |----------| ------------- |-------------|-----------| ------|----| |Distil-wav2vec2| 197.9 MB | 0.0983 | 0.2266|0.4006s| 0.0046s| |wav2vec2-base| 360 MB | 0.0389 | 0.1047|0.4919s| 0.0082s| # Usage A notebook that runs seamlessly on Google Colab is available at https://github.com/OthmaneJ/distil-wav2vec2
cf51d909d2de0afc0f5445ca86dbe756
google/long-t5-local-base
google
longt5
8
6,288
transformers
6
text2text-generation
true
false
true
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,330
false
# LongT5 (local attention, base-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing LongT5 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description LongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). LongT5 model is an extension of [T5 model](https://arxiv.org/pdf/1910.10683.pdf), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence. LongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens). ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you. ### How to use ```python from transformers import AutoTokenizer, LongT5Model tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base") model = LongT5Model.from_pretrained("google/long-t5-local-base") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{guo2021longt5, title={LongT5: Efficient Text-To-Text Transformer for Long Sequences}, author={Guo, Mandy and Ainslie, Joshua and Uthus, David and Ontanon, Santiago and Ni, Jianmo and Sung, Yun-Hsuan and Yang, Yinfei}, journal={arXiv preprint arXiv:2112.07916}, year={2021} } ```
f89a4cb036322033d5375ef6f8e0bca7
DOOGLAK/Tagged_One_100v4_NER_Model_3Epochs_AUGMENTED
DOOGLAK
bert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['tagged_one100v4_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,565
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_100v4_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v4_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4506 - Precision: 0.1649 - Recall: 0.0818 - F1: 0.1093 - Accuracy: 0.8299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 34 | 0.5649 | 0.0 | 0.0 | 0.0 | 0.7875 | | No log | 2.0 | 68 | 0.4687 | 0.1197 | 0.0400 | 0.0600 | 0.8147 | | No log | 3.0 | 102 | 0.4506 | 0.1649 | 0.0818 | 0.1093 | 0.8299 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
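A minimal, hedged inference sketch for the card above using the token-classification pipeline; the example sentence is a placeholder (note that the card's own metrics indicate this particular run scores low):

```python
from transformers import pipeline

# Sketch only: load the wikigold-split NER checkpoint from the card above.
ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_One_100v4_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",
)

print(ner("Barack Obama visited Berlin in 2013."))  # placeholder sentence
```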
3c6a4928ad25f8b52eac48bf0ba93b41
emre/wav2vec-tr-lite-AG
emre
wav2vec2
11
9
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['tr']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech']
false
true
true
1,720
false
# wav2vec-tr-lite-AG ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG") model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG") resampler = torchaudio.transforms.Resample(48_000, 16_000) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00005 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4388 | 3.7 | 400 | 1.366 | 0.9701 | | 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 | | 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 | | 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 | | 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 | | 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 | | 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 | | 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
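The usage block in the card above breaks off after loading the processor, model, and resampler. The original continuation is not available here; the following is only a generic CTC decoding sketch under the same setup, not the author's missing code:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Sketch only: mirror the card's (truncated) setup, then do standard CTC decoding.
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array(batch):
    # Load each clip and resample it to 16 kHz.
    speech_array, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```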
ceee76164f5eb4ed0a288b17e4553d30
anas-awadalla/roberta-base-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
roberta
17
5
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
983
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-256-finetuned-squad-seed-0 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
7d3da78a3f1be0a6eee5ee2e3a7e038f
huak95/mt-align-finetuned-LST-en-to-th
huak95
marian
11
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,205
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt-align-finetuned-LST-en-to-th This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 77 | 1.6042 | 13.1732 | 26.144 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
0f159bb95af26a5c6e0e8777ef8ef779
domenicrosati/t5-finetuned-parasci
domenicrosati
t5
27
3
transformers
1
summarization
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
1,043
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-finetuned-parasci This model is a fine-tuned version of [domenicrosati/t5-finetuned-parasci](https://huggingface.co/domenicrosati/t5-finetuned-parasci) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0845 - Bleu: 19.5623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
1a5e0e51f4ef65ccfe90d641b566ddbd
bgilb5/whisper-es-en-3
bgilb5
whisper
12
4
transformers
0
automatic-speech-recognition
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
true
true
true
11,411
false
# Whisper [OpenAI's Whisper](https://openai.com/blog/whisper/) The Whisper model was proposed in [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original model card. ## Intro The first paragraphs of the abstract read as follows : > We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. > When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing. The original code repository can be found [here](https://github.com/openai/whisper). ## Model details The Whisper models are trained for speech recognition and translation tasks, capable of transcribing speech audio into the text in the language it is spoken (ASR) as well as translated into English (speech translation). Researchers at OpenAI developed the models to study the robustness of speech processing systems trained under large-scale weak supervision. There are 9 models of different sizes and capabilities, summarised in the following table. | Size | Parameters | English-only model | Multilingual model | |:------:|:----------:|:------------------:|:------------------:| | tiny | 39 M | ✓ | ✓ | | base | 74 M | ✓ | ✓ | | small | 244 M | ✓ | ✓ | | medium | 769 M | ✓ | ✓ | | large | 1550 M | | ✓ | ## Model description Whisper is an auto-regressive automatic speech recognition encoder-decoder model that was trained on 680 000 hours of 16kHz sampled multilingual audio. It was fully trained in a supervised manner, with multiple tasks : - English transcription - Any-to-English speech translation - Non-English transcription - No speech prediction To each task corresponds a sequence of tokens that are given to the decoder as *context tokens*. The beginning of a transcription always starts with `<|startoftranscript|>` which is why the `decoder_start_token` is always set to `tokenizer.encode("<|startoftranscript|>")`. The following token should be the language token, which is automatically detected in the original code. Finally, the task is define using either `<|transcribe|>` or `<|translate|>`. In addition, a `<|notimestamps|>` token is added if the task does not include timestamp prediction. # Usage To transcribe or translate audio files, the model has to be used along a `WhisperProcessor`. The `WhisperProcessor.get_decoder_prompt_ids` function is used to get a list of `( idx, token )` tuples, which can either be set in the config, or directly passed to the generate function, as `forced_decoder_ids`. ## Transcription In the following example, the english only model is used. We set the `decoder_input_ids` accordingly. 
### English to english The "<|en|>" token is used to specify that the speech is in english and should be transcribed to english ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>> import torch >>> # load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small.en") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en") >>> # load dummy dataset and read soundfiles >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features >>> # Generate logits >>> logits = model(input_features, decoder_input_ids = torch.tensor([[50258]])).logits >>> # take argmax and decode >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|>'] ``` ## Evaluation This code snippet shows how to evaluate **openai/whisper-small.en** on LibriSpeech's "clean" and "other" test data. ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import soundfile as sf >>> import torch >>> from evaluate import load >>> librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en").to("cuda") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small.en") >>> def map_to_pred(batch): >>> input_features = processor(batch["audio"]["array"], return_tensors="pt").input_features >>> with torch.no_grad(): >>> logits = model(input_features.to("cuda")).logits >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids, normalize = True) >>> batch['text'] = processor.tokenizer._normalize(batch['text']) >>> batch["transcription"] = transcription >>> return batch >>> result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"]) >>> wer = load("wer") >>> print(wer.compute(predictions=ds["text"], references=ds["transcription"])) 0.07639504403417127 ``` ### Evaluated Use The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research. The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them. In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. 
The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes. ## Training Data The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language. ## Performance and Limitations Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level. However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself. Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf). In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages. ## Broader Implications We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications. There are also potential dual use concerns that come with releasing Whisper. 
While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects. ### BibTeX entry and citation info *Since no official citation was provided, we use the following in the mean time* ```bibtex @misc{radford2022whisper, title={Robust Speech Recognition via Large-Scale Weak Supervision.}, author={Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever}, year={2022}, url={https://cdn.openai.com/papers/whisper.pdf}, } ```
3cebd12dd8b312b5fc2c5aa4a3665375
paola-md/distilr2-lr2e05-wd0.1-bs32
paola-md
roberta
6
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,674
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilr2-lr2e05-wd0.1-bs32 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2809 - Rmse: 0.5300 - Mse: 0.2809 - Mae: 0.4214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2771 | 1.0 | 623 | 0.2730 | 0.5224 | 0.2730 | 0.4164 | | 0.2732 | 2.0 | 1246 | 0.2731 | 0.5226 | 0.2731 | 0.4156 | | 0.271 | 3.0 | 1869 | 0.2791 | 0.5283 | 0.2791 | 0.4308 | | 0.2681 | 4.0 | 2492 | 0.2751 | 0.5245 | 0.2751 | 0.4004 | | 0.2648 | 5.0 | 3115 | 0.2795 | 0.5286 | 0.2795 | 0.4238 | | 0.2606 | 6.0 | 3738 | 0.2809 | 0.5300 | 0.2809 | 0.4214 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
e6c74f59e5e2a4c6d8fe4327a0451ab1
jogonba2/mbarthez-copy_mechanism-hal_articles
jogonba2
mbart
14
3
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,671
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbarthez-davide_articles-copy_enhanced This model is a fine-tuned version of [moussaKam/mbarthez](https://huggingface.co/moussaKam/mbarthez) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4905 - Rouge1: 36.548 - Rouge2: 19.6282 - Rougel: 30.2513 - Rougelsum: 30.2765 - Gen Len: 25.7238 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.6706 | 1.0 | 33552 | 1.5690 | 31.2477 | 16.5455 | 26.9855 | 26.9754 | 18.6217 | | 1.3446 | 2.0 | 67104 | 1.5060 | 32.1108 | 17.1408 | 27.7833 | 27.7703 | 18.9115 | | 1.3245 | 3.0 | 100656 | 1.4905 | 32.9084 | 17.7027 | 28.2912 | 28.2975 | 18.9801 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1+cu110 - Datasets 1.11.0 - Tokenizers 0.10.3
e02af6524df3ead150b731aeddd37ad1
ZZDDBBCC/distilbert-base-uncased-finetuned-cola
ZZDDBBCC
distilbert
13
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8631 - Matthews Correlation: 0.5411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5249 | 1.0 | 535 | 0.5300 | 0.4152 | | 0.3489 | 2.0 | 1070 | 0.5238 | 0.4940 | | 0.2329 | 3.0 | 1605 | 0.6447 | 0.5162 | | 0.1692 | 4.0 | 2140 | 0.7805 | 0.5332 | | 0.1256 | 5.0 | 2675 | 0.8631 | 0.5411 | ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
9d4c4a5fcc8766068dc442c659549b45
bhadi26/hadi-rebecca-test-model-public
bhadi26
null
2
0
null
0
null
false
false
false
mit
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert']
false
true
true
8,979
false
# RoBERTa base model Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=roberta) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='roberta-base') >>> unmasker("Hello I'm a <mask> model.") [{'sequence': "<s>Hello I'm a male model.</s>", 'score': 0.3306540250778198, 'token': 2943, 'token_str': 'Ġmale'}, {'sequence': "<s>Hello I'm a female model.</s>", 'score': 0.04655390977859497, 'token': 2182, 'token_str': 'Ġfemale'}, {'sequence': "<s>Hello I'm a professional model.</s>", 'score': 0.04232972860336304, 'token': 2038, 'token_str': 'Ġprofessional'}, {'sequence': "<s>Hello I'm a fashion model.</s>", 'score': 0.037216778844594955, 'token': 2734, 'token_str': 'Ġfashion'}, {'sequence': "<s>Hello I'm a Russian model.</s>", 'score': 0.03253649175167084, 'token': 1083, 'token_str': 'ĠRussian'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = RobertaModel.from_pretrained('roberta-base') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = TFRobertaModel.from_pretrained('roberta-base') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='roberta-base') >>> unmasker("The man worked as a <mask>.") [{'sequence': '<s>The man worked as a mechanic.</s>', 'score': 0.08702439814805984, 'token': 25682, 'token_str': 'Ġmechanic'}, {'sequence': '<s>The man worked as a waiter.</s>', 'score': 0.0819653645157814, 'token': 38233, 'token_str': 'Ġwaiter'}, {'sequence': '<s>The man worked as a butcher.</s>', 'score': 0.073323555290699, 'token': 32364, 'token_str': 'Ġbutcher'}, {'sequence': '<s>The man worked as a miner.</s>', 'score': 0.046322137117385864, 'token': 18678, 'token_str': 'Ġminer'}, {'sequence': '<s>The man worked as a guard.</s>', 'score': 0.040150221437215805, 'token': 2510, 'token_str': 'Ġguard'}] >>> unmasker("The Black woman worked as a <mask>.") [{'sequence': '<s>The Black woman worked as a waitress.</s>', 'score': 0.22177888453006744, 'token': 35698, 'token_str': 'Ġwaitress'}, {'sequence': '<s>The Black woman worked as a prostitute.</s>', 'score': 0.19288744032382965, 'token': 36289, 'token_str': 'Ġprostitute'}, {'sequence': '<s>The Black woman worked as a maid.</s>', 'score': 0.06498628109693527, 'token': 29754, 'token_str': 'Ġmaid'}, {'sequence': '<s>The Black woman worked as a secretary.</s>', 'score': 0.05375480651855469, 'token': 2971, 'token_str': 'Ġsecretary'}, {'sequence': '<s>The Black woman worked as a nurse.</s>', 'score': 0.05245552211999893, 'token': 9008, 'token_str': 'Ġnurse'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The RoBERTa model was pretrained on the reunion of five datasets: - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books; - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ; - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news articles crawled between September 2016 and February 2019. - [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to train GPT-2, - [Stories](https://arxiv.org/abs/1806.02847) a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. Together theses datasets weight 160GB of text. ## Training procedure ### Preprocessing The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of the model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked with `<s>` and the end of one by `</s>` The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `<mask>`. 
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). ### Pretraining The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The optimizer used is Adam with a learning rate of 6e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\) and \\(\epsilon = 1e-6\\), a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning rate after. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | |:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:| | | 87.6 | 91.9 | 92.8 | 94.8 | 63.6 | 91.2 | 90.2 | 78.7 | ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1907-11692, author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov}, title = {RoBERTa: {A} Robustly Optimized {BERT} Pretraining Approach}, journal = {CoRR}, volume = {abs/1907.11692}, year = {2019}, url = {http://arxiv.org/abs/1907.11692}, archivePrefix = {arXiv}, eprint = {1907.11692}, timestamp = {Thu, 01 Aug 2019 08:59:33 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1907-11692.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=roberta-base"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
c17c9fb35eee81afe363e6d700d96479
google/multiberts-seed_1-step_120k
google
bert
8
12
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_120k']
false
true
true
3,521
false
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 120k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 120k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_120k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_120k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
28049af202c90ccf82acba2090c5c06f
stanfordnlp/stanza-pcm
stanfordnlp
null
7
1
stanza
0
token-classification
false
false
false
apache-2.0
['pcm']
null
null
0
0
0
0
0
0
0
['stanza', 'token-classification']
false
true
true
579
false
# Stanza model for Naija (pcm)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.

Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).

This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2022-09-25 01:53:44.595
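
## Usage sketch

The card above does not include a code snippet, so the following is a minimal sketch based on Stanza's standard pipeline API. The example sentence is a placeholder, and the exact set of processors bundled with the `pcm` package may differ from what is shown.

```python
# Minimal sketch: run the Stanza pipeline for Naija (pcm).
# Assumes network access to download the "pcm" resources on first use.
import stanza

stanza.download("pcm")                     # fetch the pcm models once
nlp = stanza.Pipeline("pcm")               # build a pipeline with the default processors
doc = nlp("Dis na one example sentence.")  # placeholder Naija text

# Print each word with its universal POS tag (available when the pos processor is loaded).
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos)
```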
2471a5ac69eb857d614ddcb604fa99e2
shishirAI/wav2vec2-xlsr-nepali
shishirAI
wav2vec2
14
5
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,050
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-nepali

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
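
### Inference sketch

The sections above describe only the training setup, so here is a minimal inference sketch. It assumes the repository ships the exported Wav2Vec2 processor (feature extractor and vocabulary) alongside the weights; `sample.wav` is a placeholder path to a Nepali speech recording.

```python
# Minimal sketch: transcribe an audio file with the generic ASR pipeline.
# "sample.wav" is a placeholder; the pipeline handles audio decoding and resampling.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shishirAI/wav2vec2-xlsr-nepali")
result = asr("sample.wav")
print(result["text"])
```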
26f77cd03cf46e8a812252eb78c6ccef
Helsinki-NLP/opus-mt-en-zle
Helsinki-NLP
marian
11
13
transformers
0
translation
true
true
false
apache-2.0
['en', 'be', 'ru', 'uk', 'zle']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,847
false
### eng-zle

* source group: English
* target group: East Slavic languages
* OPUS readme: [eng-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md)
* model: transformer
* source language(s): eng
* target language(s): bel bel_Latn orv_Cyrl rue rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-engrus.eng.rus | 27.4 | 0.550 |
| newstest2013-engrus.eng.rus | 21.4 | 0.493 |
| newstest2015-enru-engrus.eng.rus | 24.2 | 0.534 |
| newstest2016-enru-engrus.eng.rus | 23.3 | 0.518 |
| newstest2017-enru-engrus.eng.rus | 25.3 | 0.541 |
| newstest2018-enru-engrus.eng.rus | 22.4 | 0.527 |
| newstest2019-enru-engrus.eng.rus | 24.1 | 0.505 |
| Tatoeba-test.eng-bel.eng.bel | 20.8 | 0.471 |
| Tatoeba-test.eng.multi | 37.2 | 0.580 |
| Tatoeba-test.eng-orv.eng.orv | 0.6 | 0.130 |
| Tatoeba-test.eng-rue.eng.rue | 1.4 | 0.168 |
| Tatoeba-test.eng-rus.eng.rus | 41.3 | 0.616 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.7 | 0.596 |

### System Info:

- hf_name: eng-zle
- source_languages: eng
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'ru', 'uk', 'zle']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zle
- short_pair: en-zle
- chrF2_score: 0.58
- bleu: 37.2
- brevity_penalty: 0.9890000000000001
- ref_len: 63493.0
- src_name: English
- tgt_name: East Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zle
- prefer_old: False
- long_pair: eng-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
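
### Usage sketch

As noted above, a sentence-initial `>>id<<` token selects the target language. The following minimal MarianMT sketch illustrates this; the example sentences and the `rus`/`ukr` target IDs are illustrative, and the tokenizer requires the `sentencepiece` package.

```python
# Minimal sketch: translate English into Russian and Ukrainian with this model.
# The leading >>id<< token picks the target language, as described in the card.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # needs sentencepiece installed
model = MarianMTModel.from_pretrained(model_name)

src_texts = [">>rus<< This is a test.", ">>ukr<< How are you today?"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```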
c372832a5f68d433cf848d9a34617544
bnunticha/t5-small-en-to-th
bnunticha
t5
11
4
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,248
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-en-to-th

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0527
- Bleu: 0.0
- Gen Len: 17.5726

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 0.0414        | 1.0   | 17810 | 0.0527          | 0.0  | 17.5726 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
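
### Hyperparameters as code (sketch)

To make the hyperparameter list above concrete, here is an approximate mapping onto `Seq2SeqTrainingArguments`. It is a reconstruction rather than the original training script, and `output_dir` is a placeholder.

```python
# Approximate mapping of the listed hyperparameters; not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-en-to-th",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                        # "mixed_precision_training: Native AMP"
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setting.
)
```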
a6279ae4376c3a3410c2ed86c8330c03
Shobhank-iiitdwd/RoBERTa-base-squad2-QA
Shobhank-iiitdwd
roberta
12
19
transformers
0
question-answering
true
true
true
cc-by-4.0
['en']
['squad_v2']
null
0
0
0
0
0
0
0
[]
true
true
true
1,813
false
# roberta-base for QA

This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.

## Overview

**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure**: 4x Tesla v100

## Hyperparameters

```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```

## Usage

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "Shobhank-iiitdwd/RoBERTa-base-squad2-QA"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Performance

Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).

```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
d6e60f26ba79ee5e762e2fc15c03ba16
sd-concepts-library/gba-fe-class-cards
sd-concepts-library
null
492
0
null
2
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
57,360
false
### GBA FE Class Cards on Stable Diffusion This is the `classcard` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![classcard 0](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/146.jpeg) ![classcard 1](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/40.jpeg) ![classcard 2](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/246.jpeg) ![classcard 3](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/182.jpeg) ![classcard 4](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/1.jpeg) ![classcard 5](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/13.jpeg) ![classcard 6](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/253.jpeg) ![classcard 7](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/12.jpeg) ![classcard 8](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/448.jpeg) ![classcard 9](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/377.jpeg) ![classcard 10](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/31.jpeg) ![classcard 11](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/405.jpeg) ![classcard 12](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/37.jpeg) ![classcard 13](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/300.jpeg) ![classcard 14](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/280.jpeg) ![classcard 15](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/462.jpeg) ![classcard 16](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/339.jpeg) ![classcard 17](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/173.jpeg) ![classcard 18](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/344.jpeg) ![classcard 19](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/170.jpeg) ![classcard 20](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/149.jpeg) ![classcard 21](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/335.jpeg) ![classcard 22](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/249.jpeg) ![classcard 23](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/420.jpeg) ![classcard 24](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/274.jpeg) ![classcard 
25](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/127.jpeg) ![classcard 26](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/268.jpeg) ![classcard 27](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/43.jpeg) ![classcard 28](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/345.jpeg) ![classcard 29](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/456.jpeg) ![classcard 30](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/360.jpeg) ![classcard 31](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/310.jpeg) ![classcard 32](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/68.jpeg) ![classcard 33](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/480.jpeg) ![classcard 34](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/450.jpeg) ![classcard 35](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/258.jpeg) ![classcard 36](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/74.jpeg) ![classcard 37](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/394.jpeg) ![classcard 38](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/157.jpeg) ![classcard 39](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/114.jpeg) ![classcard 40](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/356.jpeg) ![classcard 41](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/48.jpeg) ![classcard 42](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/376.jpeg) ![classcard 43](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/374.jpeg) ![classcard 44](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/230.jpeg) ![classcard 45](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/160.jpeg) ![classcard 46](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/364.jpeg) ![classcard 47](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/76.jpeg) ![classcard 48](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/333.jpeg) ![classcard 49](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/476.jpeg) ![classcard 50](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/164.jpeg) ![classcard 51](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/216.jpeg) ![classcard 52](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/87.jpeg) ![classcard 53](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/18.jpeg) ![classcard 54](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/304.jpeg) ![classcard 
55](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/282.jpeg) ![classcard 56](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/286.jpeg) ![classcard 57](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/45.jpeg) ![classcard 58](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/208.jpeg) ![classcard 59](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/441.jpeg) ![classcard 60](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/204.jpeg) ![classcard 61](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/95.jpeg) ![classcard 62](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/135.jpeg) ![classcard 63](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/464.jpeg) ![classcard 64](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/144.jpeg) ![classcard 65](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/390.jpeg) ![classcard 66](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/140.jpeg) ![classcard 67](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/166.jpeg) ![classcard 68](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/237.jpeg) ![classcard 69](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/199.jpeg) ![classcard 70](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/459.jpeg) ![classcard 71](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/219.jpeg) ![classcard 72](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/402.jpeg) ![classcard 73](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/440.jpeg) ![classcard 74](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/454.jpeg) ![classcard 75](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/185.jpeg) ![classcard 76](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/28.jpeg) ![classcard 77](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/453.jpeg) ![classcard 78](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/83.jpeg) ![classcard 79](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/129.jpeg) ![classcard 80](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/380.jpeg) ![classcard 81](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/54.jpeg) ![classcard 82](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/254.jpeg) ![classcard 83](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/366.jpeg) ![classcard 84](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/278.jpeg) ![classcard 
85](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/461.jpeg) ![classcard 86](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/8.jpeg) ![classcard 87](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/365.jpeg) ![classcard 88](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/197.jpeg) ![classcard 89](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/159.jpeg) ![classcard 90](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/338.jpeg) ![classcard 91](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/429.jpeg) ![classcard 92](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/293.jpeg) ![classcard 93](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/428.jpeg) ![classcard 94](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/60.jpeg) ![classcard 95](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/470.jpeg) ![classcard 96](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/473.jpeg) ![classcard 97](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/194.jpeg) ![classcard 98](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/23.jpeg) ![classcard 99](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/112.jpeg) ![classcard 100](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/396.jpeg) ![classcard 101](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/235.jpeg) ![classcard 102](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/169.jpeg) ![classcard 103](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/321.jpeg) ![classcard 104](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/260.jpeg) ![classcard 105](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/103.jpeg) ![classcard 106](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/151.jpeg) ![classcard 107](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/34.jpeg) ![classcard 108](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/325.jpeg) ![classcard 109](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/410.jpeg) ![classcard 110](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/51.jpeg) ![classcard 111](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/236.jpeg) ![classcard 112](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/250.jpeg) ![classcard 113](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/257.jpeg) ![classcard 114](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/433.jpeg) ![classcard 
115](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/384.jpeg) ![classcard 116](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/133.jpeg) ![classcard 117](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/279.jpeg) ![classcard 118](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/115.jpeg) ![classcard 119](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/41.jpeg) ![classcard 120](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/288.jpeg) ![classcard 121](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/154.jpeg) ![classcard 122](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/190.jpeg) ![classcard 123](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/305.jpeg) ![classcard 124](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/116.jpeg) ![classcard 125](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/468.jpeg) ![classcard 126](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/17.jpeg) ![classcard 127](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/223.jpeg) ![classcard 128](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/446.jpeg) ![classcard 129](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/232.jpeg) ![classcard 130](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/172.jpeg) ![classcard 131](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/407.jpeg) ![classcard 132](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/225.jpeg) ![classcard 133](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/57.jpeg) ![classcard 134](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/77.jpeg) ![classcard 135](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/66.jpeg) ![classcard 136](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/65.jpeg) ![classcard 137](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/49.jpeg) ![classcard 138](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/121.jpeg) ![classcard 139](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/379.jpeg) ![classcard 140](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/466.jpeg) ![classcard 141](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/382.jpeg) ![classcard 142](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/213.jpeg) ![classcard 143](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/9.jpeg) ![classcard 144](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/202.jpeg) ![classcard 
145](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/210.jpeg) ![classcard 146](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/316.jpeg) ![classcard 147](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/359.jpeg) ![classcard 148](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/419.jpeg) ![classcard 149](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/207.jpeg) ![classcard 150](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/266.jpeg) ![classcard 151](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/399.jpeg) ![classcard 152](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/416.jpeg) ![classcard 153](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/413.jpeg) ![classcard 154](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/171.jpeg) ![classcard 155](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/181.jpeg) ![classcard 156](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/78.jpeg) ![classcard 157](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/58.jpeg) ![classcard 158](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/123.jpeg) ![classcard 159](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/153.jpeg) ![classcard 160](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/52.jpeg) ![classcard 161](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/389.jpeg) ![classcard 162](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/64.jpeg) ![classcard 163](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/163.jpeg) ![classcard 164](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/85.jpeg) ![classcard 165](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/392.jpeg) ![classcard 166](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/334.jpeg) ![classcard 167](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/30.jpeg) ![classcard 168](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/451.jpeg) ![classcard 169](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/73.jpeg) ![classcard 170](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/343.jpeg) ![classcard 171](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/152.jpeg) ![classcard 172](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/2.jpeg) ![classcard 173](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/137.jpeg) ![classcard 174](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/36.jpeg) ![classcard 
175](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/486.jpeg) ![classcard 176](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/352.jpeg) ![classcard 177](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/270.jpeg) ![classcard 178](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/224.jpeg) ![classcard 179](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/307.jpeg) ![classcard 180](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/245.jpeg) ![classcard 181](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/263.jpeg) ![classcard 182](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/403.jpeg) ![classcard 183](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/414.jpeg) ![classcard 184](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/261.jpeg) ![classcard 185](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/427.jpeg) ![classcard 186](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/145.jpeg) ![classcard 187](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/67.jpeg) ![classcard 188](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/341.jpeg) ![classcard 189](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/330.jpeg) ![classcard 190](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/436.jpeg) ![classcard 191](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/362.jpeg) ![classcard 192](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/291.jpeg) ![classcard 193](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/474.jpeg) ![classcard 194](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/342.jpeg) ![classcard 195](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/108.jpeg) ![classcard 196](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/0.jpeg) ![classcard 197](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/55.jpeg) ![classcard 198](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/29.jpeg) ![classcard 199](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/14.jpeg) ![classcard 200](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/25.jpeg) ![classcard 201](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/432.jpeg) ![classcard 202](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/94.jpeg) ![classcard 203](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/50.jpeg) ![classcard 204](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/417.jpeg) ![classcard 
205](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/431.jpeg) ![classcard 206](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/148.jpeg) ![classcard 207](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/158.jpeg) ![classcard 208](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/469.jpeg) ![classcard 209](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/277.jpeg) ![classcard 210](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/244.jpeg) ![classcard 211](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/294.jpeg) ![classcard 212](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/458.jpeg) ![classcard 213](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/422.jpeg) ![classcard 214](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/251.jpeg) ![classcard 215](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/147.jpeg) ![classcard 216](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/122.jpeg) ![classcard 217](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/275.jpeg) ![classcard 218](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/92.jpeg) ![classcard 219](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/239.jpeg) ![classcard 220](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/332.jpeg) ![classcard 221](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/104.jpeg) ![classcard 222](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/177.jpeg) ![classcard 223](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/175.jpeg) ![classcard 224](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/368.jpeg) ![classcard 225](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/39.jpeg) ![classcard 226](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/4.jpeg) ![classcard 227](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/61.jpeg) ![classcard 228](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/228.jpeg) ![classcard 229](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/176.jpeg) ![classcard 230](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/227.jpeg) ![classcard 231](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/240.jpeg) ![classcard 232](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/385.jpeg) ![classcard 233](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/222.jpeg) ![classcard 234](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/255.jpeg) ![classcard 
235](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/238.jpeg) ![classcard 236](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/292.jpeg) ![classcard 237](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/452.jpeg) ![classcard 238](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/162.jpeg) ![classcard 239](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/284.jpeg) ![classcard 240](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/231.jpeg) ![classcard 241](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/259.jpeg) ![classcard 242](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/435.jpeg) ![classcard 243](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/273.jpeg) ![classcard 244](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/361.jpeg) ![classcard 245](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/337.jpeg) ![classcard 246](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/98.jpeg) ![classcard 247](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/10.jpeg) ![classcard 248](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/132.jpeg) ![classcard 249](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/124.jpeg) ![classcard 250](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/370.jpeg) ![classcard 251](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/156.jpeg) ![classcard 252](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/113.jpeg) ![classcard 253](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/439.jpeg) ![classcard 254](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/6.jpeg) ![classcard 255](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/324.jpeg) ![classcard 256](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/404.jpeg) ![classcard 257](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/478.jpeg) ![classcard 258](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/93.jpeg) ![classcard 259](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/192.jpeg) ![classcard 260](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/408.jpeg) ![classcard 261](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/100.jpeg) ![classcard 262](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/386.jpeg) ![classcard 263](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/375.jpeg) ![classcard 264](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/465.jpeg) ![classcard 
265](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/393.jpeg) ![classcard 266](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/206.jpeg) ![classcard 267](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/303.jpeg) ![classcard 268](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/24.jpeg) ![classcard 269](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/445.jpeg) ![classcard 270](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/319.jpeg) ![classcard 271](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/11.jpeg) ![classcard 272](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/90.jpeg) ![classcard 273](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/179.jpeg) ![classcard 274](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/80.jpeg) ![classcard 275](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/449.jpeg) ![classcard 276](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/119.jpeg) ![classcard 277](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/318.jpeg) ![classcard 278](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/308.jpeg) ![classcard 279](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/320.jpeg) ![classcard 280](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/110.jpeg) ![classcard 281](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/214.jpeg) ![classcard 282](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/264.jpeg) ![classcard 283](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/328.jpeg) ![classcard 284](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/471.jpeg) ![classcard 285](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/387.jpeg) ![classcard 286](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/32.jpeg) ![classcard 287](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/21.jpeg) ![classcard 288](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/353.jpeg) ![classcard 289](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/460.jpeg) ![classcard 290](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/301.jpeg) ![classcard 291](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/444.jpeg) ![classcard 292](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/53.jpeg) ![classcard 293](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/400.jpeg) ![classcard 294](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/421.jpeg) ![classcard 
295](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/42.jpeg) ![classcard 296](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/69.jpeg) ![classcard 297](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/242.jpeg) ![classcard 298](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/322.jpeg) ![classcard 299](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/89.jpeg) ![classcard 300](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/309.jpeg) ![classcard 301](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/5.jpeg) ![classcard 302](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/388.jpeg) ![classcard 303](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/475.jpeg) ![classcard 304](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/82.jpeg) ![classcard 305](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/272.jpeg) ![classcard 306](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/327.jpeg) ![classcard 307](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/59.jpeg) ![classcard 308](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/479.jpeg) ![classcard 309](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/296.jpeg) ![classcard 310](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/62.jpeg) ![classcard 311](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/424.jpeg) ![classcard 312](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/281.jpeg) ![classcard 313](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/351.jpeg) ![classcard 314](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/434.jpeg) ![classcard 315](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/283.jpeg) ![classcard 316](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/3.jpeg) ![classcard 317](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/269.jpeg) ![classcard 318](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/276.jpeg) ![classcard 319](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/120.jpeg) ![classcard 320](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/189.jpeg) ![classcard 321](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/128.jpeg) ![classcard 322](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/81.jpeg) ![classcard 323](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/150.jpeg) ![classcard 324](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/406.jpeg) ![classcard 
325](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/395.jpeg) ![classcard 326](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/99.jpeg) ![classcard 327](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/302.jpeg) ![classcard 328](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/346.jpeg) ![classcard 329](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/63.jpeg) ![classcard 330](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/72.jpeg) ![classcard 331](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/109.jpeg) ![classcard 332](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/200.jpeg) ![classcard 333](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/96.jpeg) ![classcard 334](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/285.jpeg) ![classcard 335](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/323.jpeg) ![classcard 336](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/56.jpeg) ![classcard 337](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/118.jpeg) ![classcard 338](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/209.jpeg) ![classcard 339](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/252.jpeg) ![classcard 340](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/155.jpeg) ![classcard 341](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/168.jpeg) ![classcard 342](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/136.jpeg) ![classcard 343](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/467.jpeg) ![classcard 344](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/193.jpeg) ![classcard 345](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/243.jpeg) ![classcard 346](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/306.jpeg) ![classcard 347](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/398.jpeg) ![classcard 348](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/26.jpeg) ![classcard 349](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/381.jpeg) ![classcard 350](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/298.jpeg) ![classcard 351](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/167.jpeg) ![classcard 352](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/233.jpeg) ![classcard 353](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/331.jpeg) ![classcard 354](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/447.jpeg) ![classcard 
355](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/415.jpeg) ![classcard 356](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/226.jpeg) ![classcard 357](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/455.jpeg) ![classcard 358](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/97.jpeg) ![classcard 359](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/358.jpeg) ![classcard 360](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/19.jpeg) ![classcard 361](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/105.jpeg) ![classcard 362](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/383.jpeg) ![classcard 363](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/125.jpeg) ![classcard 364](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/131.jpeg) ![classcard 365](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/130.jpeg) ![classcard 366](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/256.jpeg) ![classcard 367](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/47.jpeg) ![classcard 368](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/212.jpeg) ![classcard 369](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/347.jpeg) ![classcard 370](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/71.jpeg) ![classcard 371](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/165.jpeg) ![classcard 372](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/482.jpeg) ![classcard 373](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/191.jpeg) ![classcard 374](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/314.jpeg) ![classcard 375](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/348.jpeg) ![classcard 376](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/20.jpeg) ![classcard 377](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/139.jpeg) ![classcard 378](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/184.jpeg) ![classcard 379](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/161.jpeg) ![classcard 380](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/290.jpeg) ![classcard 381](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/15.jpeg) ![classcard 382](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/326.jpeg) ![classcard 383](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/312.jpeg) ![classcard 384](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/262.jpeg) ![classcard 
385](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/483.jpeg) ![classcard 386](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/203.jpeg) ![classcard 387](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/371.jpeg) ![classcard 388](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/215.jpeg) ![classcard 389](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/315.jpeg) ![classcard 390](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/442.jpeg) ![classcard 391](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/16.jpeg) ![classcard 392](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/217.jpeg) ![classcard 393](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/44.jpeg) ![classcard 394](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/33.jpeg) ![classcard 395](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/117.jpeg) ![classcard 396](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/220.jpeg) ![classcard 397](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/425.jpeg) ![classcard 398](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/38.jpeg) ![classcard 399](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/248.jpeg) ![classcard 400](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/357.jpeg) ![classcard 401](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/443.jpeg) ![classcard 402](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/317.jpeg) ![classcard 403](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/485.jpeg) ![classcard 404](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/22.jpeg) ![classcard 405](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/295.jpeg) ![classcard 406](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/423.jpeg) ![classcard 407](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/106.jpeg) ![classcard 408](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/329.jpeg) ![classcard 409](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/340.jpeg) ![classcard 410](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/195.jpeg) ![classcard 411](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/349.jpeg) ![classcard 412](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/336.jpeg) ![classcard 413](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/201.jpeg) ![classcard 414](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/289.jpeg) ![classcard 
415](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/378.jpeg) ![classcard 416](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/373.jpeg) ![classcard 417](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/86.jpeg) ![classcard 418](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/198.jpeg) ![classcard 419](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/174.jpeg) ![classcard 420](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/188.jpeg) ![classcard 421](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/412.jpeg) ![classcard 422](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/430.jpeg) ![classcard 423](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/311.jpeg) ![classcard 424](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/355.jpeg) ![classcard 425](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/363.jpeg) ![classcard 426](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/211.jpeg) ![classcard 427](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/88.jpeg) ![classcard 428](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/484.jpeg) ![classcard 429](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/265.jpeg) ![classcard 430](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/354.jpeg) ![classcard 431](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/107.jpeg) ![classcard 432](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/75.jpeg) ![classcard 433](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/79.jpeg) ![classcard 434](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/372.jpeg) ![classcard 435](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/221.jpeg) ![classcard 436](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/472.jpeg) ![classcard 437](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/141.jpeg) ![classcard 438](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/297.jpeg) ![classcard 439](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/267.jpeg) ![classcard 440](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/369.jpeg) ![classcard 441](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/401.jpeg) ![classcard 442](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/247.jpeg) ![classcard 443](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/27.jpeg) ![classcard 444](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/70.jpeg) ![classcard 
445](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/91.jpeg) ![classcard 446](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/218.jpeg) ![classcard 447](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/411.jpeg) ![classcard 448](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/234.jpeg) ![classcard 449](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/142.jpeg) ![classcard 450](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/180.jpeg) ![classcard 451](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/299.jpeg) ![classcard 452](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/205.jpeg) ![classcard 453](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/457.jpeg) ![classcard 454](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/287.jpeg) ![classcard 455](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/350.jpeg) ![classcard 456](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/134.jpeg) ![classcard 457](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/391.jpeg) ![classcard 458](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/186.jpeg) ![classcard 459](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/437.jpeg) ![classcard 460](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/477.jpeg) ![classcard 461](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/187.jpeg) ![classcard 462](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/84.jpeg) ![classcard 463](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/196.jpeg) ![classcard 464](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/7.jpeg) ![classcard 465](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/35.jpeg) ![classcard 466](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/183.jpeg) ![classcard 467](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/397.jpeg) ![classcard 468](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/229.jpeg) ![classcard 469](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/101.jpeg) ![classcard 470](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/46.jpeg) ![classcard 471](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/111.jpeg) ![classcard 472](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/367.jpeg) ![classcard 473](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/463.jpeg) ![classcard 474](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/418.jpeg) ![classcard 
475](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/143.jpeg) ![classcard 476](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/241.jpeg) ![classcard 477](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/138.jpeg) ![classcard 478](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/126.jpeg) ![classcard 479](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/438.jpeg) ![classcard 480](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/481.jpeg) ![classcard 481](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/313.jpeg) ![classcard 482](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/271.jpeg) ![classcard 483](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/409.jpeg) ![classcard 484](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/178.jpeg) ![classcard 485](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/102.jpeg) ![classcard 486](https://huggingface.co/sd-concepts-library/gba-fe-class-cards/resolve/main/concept_images/426.jpeg)
2d80de896358446a4682537bf3c7fc4c
fathyshalab/all-roberta-large-v1-travel-4-16-5-oos
fathyshalab
roberta
11
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,515
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-travel-4-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7625 | 1.0 | 1 | 2.5258 | 0.2933 | | 2.0955 | 2.0 | 2 | 2.3775 | 0.3333 | | 1.7076 | 3.0 | 3 | 2.2590 | 0.38 | | 1.3257 | 4.0 | 4 | 2.1788 | 0.4089 | | 1.1109 | 5.0 | 5 | 2.1384 | 0.4289 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
a2a355439776eaa9f31d32745fb620cd
adeebt/opus-mt-en-ml-finetuned-en-to-ml
adeebt
marian
13
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,343
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # adeebt/opus-mt-en-ml-finetuned-en-to-ml This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ml](https://huggingface.co/Helsinki-NLP/opus-mt-en-ml) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5102 - Validation Loss: 2.2650 - Train Bleu: 6.9525 - Train Gen Len: 22.3542 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.0002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:-----:| | 2.5102 | 2.2650 | 6.9525 | 22.3542 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
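A rough inference sketch (not part of the original card): the snippet below loads the TensorFlow checkpoint with `transformers` and translates a made-up English sentence to Malayalam; the card reports Keras/TensorFlow training, so the TF model class is used.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "adeebt/opus-mt-en-ml-finetuned-en-to-ml"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)  # TF weights, per the card

# translate a sample English sentence to Malayalam
batch = tokenizer(["How are you today?"], return_tensors="tf")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```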
9378edd31e949a19550e01bb19ee5dcf
hr16/any-ely-wd-ira-olympus-3500
hr16
null
17
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
542
false
### Model The DreamBooth concept any-ely-wd-ira-olympus-3500 was trained by hr16 with the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of the concept: WIP
902e3398922e0d3c0a3497ba27b8348c
usvsnsp/code-vs-nl
usvsnsp
distilbert
13
20
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['bookcorpus', 'codeparrot/github-code']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,633
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-vs-nl This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [bookcorpus](https://huggingface.co/datasets/bookcorpus) dataset for text and the [codeparrot/github-code](https://huggingface.co/datasets/codeparrot/github-code) dataset for code. It achieves the following results on the evaluation set: - Loss: 0.5180 - Accuracy: 0.9951 - F1 Score: 0.9950 ## Model description As it is a fine-tuned model, its architecture is the same as distilbert-base-uncased for sequence classification. ## Intended uses & limitations The model can be used to classify documents as either natural-language text or code. ## Training and evaluation data The training data is an equal random sample drawn from the two datasets above. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 256 - eval_batch_size: 1024 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.5732 | 0.07 | 500 | 0.5658 | 0.9934 | 0.9934 | | 0.5254 | 0.14 | 1000 | 0.5180 | 0.9951 | 0.9950 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
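A usage sketch (not from the original card): the classifier can be run through the standard `transformers` text-classification pipeline; the two inputs below are made up.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="usvsnsp/code-vs-nl")

samples = [
    "def add(a, b):\n    return a + b",              # looks like code
    "The quick brown fox jumps over the lazy dog.",  # looks like natural language
]
print(classifier(samples))  # one {label, score} dict per input
```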
d61085bb2c18ca24029dc5032ac1715e
espnet/kan-bayashi_vctk_xvector_transformer
espnet
null
25
0
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['en']
['vctk']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
1,804
false
## Example ESPnet2 TTS model ### `kan-bayashi/vctk_xvector_transformer` ♻️ Imported from https://zenodo.org/record/4393279/ This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
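Since the demo section above still says "coming soon", here is a minimal sketch of how an ESPnet2 TTS checkpoint is typically loaded via `espnet2`'s `Text2Speech` interface. The zero x-vector and its 512-dim size are placeholder assumptions: this model conditions on speaker x-vectors, so a real embedding must be supplied for sensible output.

```python
# pip install espnet espnet_model_zoo
import numpy as np
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_xvector_transformer")

# placeholder x-vector; extract a real speaker embedding (e.g. with Kaldi) in practice
spembs = np.zeros(512, dtype=np.float32)
output = text2speech("Hello, this is a test.", spembs=spembs)
wav = output["wav"]  # 1-D synthesized waveform
```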
7e30b80bc35d8532d7a8c8a5e4ff0580
sayakpaul/glpn-nyu-finetuned-diode-230103-091356
sayakpaul
glpn
7
1
transformers
0
depth-estimation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['vision', 'depth-estimation', 'generated_from_trainer']
true
true
true
14,187
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glpn-nyu-finetuned-diode-230103-091356 This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set: - Loss: 0.4360 - Mae: 0.4251 - Rmse: 0.6169 - Abs Rel: 0.4500 - Log Mae: 0.1721 - Log Rmse: 0.2269 - Delta1: 0.3828 - Delta2: 0.6326 - Delta3: 0.8051 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 24 - eval_batch_size: 48 - seed: 2022 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:| | 1.0762 | 1.0 | 72 | 0.5031 | 0.4779 | 0.6690 | 0.5503 | 0.2006 | 0.2591 | 0.3020 | 0.5337 | 0.8000 | | 0.478 | 2.0 | 144 | 0.4653 | 0.4509 | 0.6307 | 0.4891 | 0.1861 | 0.2377 | 0.3300 | 0.5805 | 0.7734 | | 0.4668 | 3.0 | 216 | 0.4845 | 0.4712 | 0.6373 | 0.5469 | 0.1963 | 0.2471 | 0.3110 | 0.5254 | 0.7235 | | 0.4389 | 4.0 | 288 | 0.4587 | 0.4368 | 0.6219 | 0.4887 | 0.1787 | 0.2344 | 0.3578 | 0.6099 | 0.7926 | | 0.4626 | 5.0 | 360 | 0.4879 | 0.4662 | 0.6351 | 0.5617 | 0.1937 | 0.2482 | 0.3135 | 0.5462 | 0.7395 | | 0.4534 | 6.0 | 432 | 0.4638 | 0.4422 | 0.6236 | 0.4951 | 0.1810 | 0.2358 | 0.3606 | 0.5844 | 0.7831 | | 0.4108 | 7.0 | 504 | 0.4688 | 0.4508 | 0.6279 | 0.5050 | 0.1856 | 0.2385 | 0.3426 | 0.5701 | 0.7623 | | 0.3832 | 8.0 | 576 | 0.4759 | 0.4533 | 0.6284 | 0.5257 | 0.1869 | 0.2411 | 0.3331 | 0.5701 | 0.7617 | | 0.4097 | 9.0 | 648 | 0.4771 | 0.4501 | 0.6303 | 0.5361 | 0.1855 | 0.2433 | 0.3454 | 0.5838 | 0.7609 | | 0.3799 | 10.0 | 720 | 0.4575 | 0.4375 | 0.6240 | 0.4874 | 0.1790 | 0.2349 | 0.3669 | 0.6032 | 0.7916 | | 0.3659 | 11.0 | 792 | 0.4718 | 0.4590 | 0.6298 | 0.5176 | 0.1893 | 0.2396 | 0.3283 | 0.5502 | 0.7368 | | 0.4145 | 12.0 | 864 | 0.4776 | 0.4561 | 0.6298 | 0.5325 | 0.1883 | 0.2421 | 0.3333 | 0.5611 | 0.7540 | | 0.4224 | 13.0 | 936 | 0.4320 | 0.4138 | 0.6202 | 0.4013 | 0.1655 | 0.2232 | 0.4217 | 0.6641 | 0.8004 | | 0.4142 | 14.0 | 1008 | 0.4597 | 0.4440 | 0.6234 | 0.4842 | 0.1813 | 0.2330 | 0.3520 | 0.5895 | 0.7617 | | 0.4393 | 15.0 | 1080 | 0.4333 | 0.4251 | 0.6197 | 0.4182 | 0.1712 | 0.2225 | 0.3787 | 0.6303 | 0.8100 | | 0.4045 | 16.0 | 1152 | 0.4603 | 0.4356 | 0.6197 | 0.4819 | 0.1776 | 0.2322 | 0.3635 | 0.6050 | 0.7858 | | 0.3708 | 17.0 | 1224 | 0.4738 | 0.4567 | 0.6292 | 0.5264 | 0.1886 | 0.2411 | 0.3283 | 0.5557 | 0.7596 | | 0.4042 | 18.0 | 1296 | 0.5004 | 0.4802 | 0.6423 | 0.6101 | 0.2008 | 0.2560 | 0.3022 | 0.5165 | 0.6931 | | 0.3763 | 19.0 | 1368 | 0.4501 | 0.4361 | 0.6213 | 0.4723 | 0.1772 | 0.2303 | 0.3634 | 0.6034 | 0.7889 | | 0.4084 | 20.0 | 1440 | 0.4272 | 0.4133 | 0.6208 | 0.3958 | 0.1649 | 0.2226 | 0.4284 | 0.6684 | 0.8009 | | 0.3637 | 21.0 | 1512 | 0.4307 | 0.4145 | 0.6199 | 0.4134 | 0.1665 | 0.2241 | 0.3957 | 0.6847 | 
0.8137 | | 0.3655 | 22.0 | 1584 | 0.4591 | 0.4374 | 0.6370 | 0.4594 | 0.1791 | 0.2384 | 0.3816 | 0.6264 | 0.7826 | | 0.3844 | 23.0 | 1656 | 0.4692 | 0.4444 | 0.6273 | 0.5241 | 0.1824 | 0.2407 | 0.3540 | 0.5990 | 0.7756 | | 0.428 | 24.0 | 1728 | 0.4982 | 0.4753 | 0.6403 | 0.6084 | 0.1984 | 0.2552 | 0.3099 | 0.5233 | 0.7204 | | 0.4051 | 25.0 | 1800 | 0.4824 | 0.4618 | 0.6329 | 0.5533 | 0.1915 | 0.2461 | 0.3248 | 0.5495 | 0.7415 | | 0.3584 | 26.0 | 1872 | 0.4434 | 0.4207 | 0.6177 | 0.4468 | 0.1694 | 0.2277 | 0.3975 | 0.6442 | 0.8038 | | 0.3443 | 27.0 | 1944 | 0.4602 | 0.4434 | 0.6241 | 0.4912 | 0.1822 | 0.2351 | 0.3431 | 0.5877 | 0.7893 | | 0.3714 | 28.0 | 2016 | 0.4818 | 0.4594 | 0.6316 | 0.5521 | 0.1900 | 0.2455 | 0.3283 | 0.5567 | 0.7493 | | 0.3688 | 29.0 | 2088 | 0.4443 | 0.4215 | 0.6242 | 0.4386 | 0.1702 | 0.2294 | 0.4024 | 0.6522 | 0.8065 | | 0.3615 | 30.0 | 2160 | 0.4462 | 0.4291 | 0.6189 | 0.4500 | 0.1739 | 0.2277 | 0.3792 | 0.6208 | 0.7896 | | 0.3655 | 31.0 | 2232 | 0.4808 | 0.4574 | 0.6305 | 0.5524 | 0.1893 | 0.2452 | 0.3322 | 0.5590 | 0.7460 | | 0.3576 | 32.0 | 2304 | 0.4321 | 0.4102 | 0.6182 | 0.4079 | 0.1640 | 0.2241 | 0.4296 | 0.6713 | 0.8074 | | 0.3947 | 33.0 | 2376 | 0.4468 | 0.4298 | 0.6232 | 0.4574 | 0.1744 | 0.2306 | 0.3873 | 0.6163 | 0.7873 | | 0.3402 | 34.0 | 2448 | 0.4565 | 0.4352 | 0.6195 | 0.4913 | 0.1776 | 0.2337 | 0.3734 | 0.6039 | 0.7865 | | 0.3412 | 35.0 | 2520 | 0.4438 | 0.4261 | 0.6180 | 0.4546 | 0.1728 | 0.2279 | 0.3778 | 0.6252 | 0.8043 | | 0.3547 | 36.0 | 2592 | 0.4577 | 0.4416 | 0.6218 | 0.4868 | 0.1807 | 0.2329 | 0.3517 | 0.5862 | 0.7862 | | 0.3425 | 37.0 | 2664 | 0.4682 | 0.4511 | 0.6285 | 0.5210 | 0.1860 | 0.2406 | 0.3411 | 0.5748 | 0.7694 | | 0.3853 | 38.0 | 2736 | 0.4752 | 0.4514 | 0.6289 | 0.5458 | 0.1863 | 0.2438 | 0.3408 | 0.5721 | 0.7760 | | 0.3643 | 39.0 | 2808 | 0.4737 | 0.4547 | 0.6291 | 0.5401 | 0.1875 | 0.2428 | 0.3316 | 0.5673 | 0.7617 | | 0.398 | 40.0 | 2880 | 0.4662 | 0.4467 | 0.6274 | 0.5124 | 0.1838 | 0.2394 | 0.3514 | 0.5823 | 0.7700 | | 0.3579 | 41.0 | 2952 | 0.4781 | 0.4545 | 0.6290 | 0.5513 | 0.1880 | 0.2446 | 0.3343 | 0.5624 | 0.7718 | | 0.3545 | 42.0 | 3024 | 0.4460 | 0.4277 | 0.6221 | 0.4553 | 0.1730 | 0.2294 | 0.3862 | 0.6285 | 0.7999 | | 0.3527 | 43.0 | 3096 | 0.4330 | 0.4153 | 0.6169 | 0.4221 | 0.1668 | 0.2240 | 0.4106 | 0.6618 | 0.8084 | | 0.3251 | 44.0 | 3168 | 0.4503 | 0.4286 | 0.6172 | 0.4781 | 0.1744 | 0.2313 | 0.3725 | 0.6224 | 0.8095 | | 0.3433 | 45.0 | 3240 | 0.4471 | 0.4346 | 0.6187 | 0.4652 | 0.1772 | 0.2293 | 0.3606 | 0.6043 | 0.7952 | | 0.3607 | 46.0 | 3312 | 0.4474 | 0.4263 | 0.6166 | 0.4658 | 0.1728 | 0.2293 | 0.3835 | 0.6287 | 0.8039 | | 0.3722 | 47.0 | 3384 | 0.4527 | 0.4337 | 0.6205 | 0.4857 | 0.1768 | 0.2329 | 0.3696 | 0.6084 | 0.7922 | | 0.3322 | 48.0 | 3456 | 0.4629 | 0.4431 | 0.6236 | 0.5118 | 0.1818 | 0.2373 | 0.3460 | 0.5897 | 0.7954 | | 0.3624 | 49.0 | 3528 | 0.4431 | 0.4304 | 0.6203 | 0.4511 | 0.1742 | 0.2277 | 0.3827 | 0.6152 | 0.7917 | | 0.3386 | 50.0 | 3600 | 0.4475 | 0.4260 | 0.6173 | 0.4697 | 0.1727 | 0.2301 | 0.3870 | 0.6283 | 0.8102 | | 0.3316 | 51.0 | 3672 | 0.4558 | 0.4328 | 0.6194 | 0.4982 | 0.1770 | 0.2345 | 0.3618 | 0.6120 | 0.8124 | | 0.3259 | 52.0 | 3744 | 0.4316 | 0.4084 | 0.6165 | 0.4234 | 0.1630 | 0.2245 | 0.4311 | 0.6809 | 0.8148 | | 0.3299 | 53.0 | 3816 | 0.4489 | 0.4222 | 0.6198 | 0.4779 | 0.1706 | 0.2327 | 0.4049 | 0.6441 | 0.8021 | | 0.3334 | 54.0 | 3888 | 0.4831 | 0.4598 | 0.6319 | 0.5716 | 0.1902 | 0.2476 | 0.3281 | 0.5597 | 0.7549 | | 0.3342 | 55.0 | 3960 | 0.4478 | 0.4288 | 0.6166 
| 0.4786 | 0.1745 | 0.2310 | 0.3749 | 0.6218 | 0.8091 | | 0.3276 | 56.0 | 4032 | 0.4524 | 0.4342 | 0.6192 | 0.4852 | 0.1773 | 0.2326 | 0.3596 | 0.6113 | 0.8007 | | 0.326 | 57.0 | 4104 | 0.4411 | 0.4226 | 0.6162 | 0.4486 | 0.1704 | 0.2268 | 0.3947 | 0.6403 | 0.7959 | | 0.3429 | 58.0 | 4176 | 0.4578 | 0.4418 | 0.6221 | 0.4961 | 0.1812 | 0.2349 | 0.3497 | 0.5956 | 0.7750 | | 0.3347 | 59.0 | 4248 | 0.4586 | 0.4409 | 0.6220 | 0.4946 | 0.1808 | 0.2347 | 0.3439 | 0.6004 | 0.7869 | | 0.3215 | 60.0 | 4320 | 0.4583 | 0.4382 | 0.6232 | 0.4974 | 0.1789 | 0.2357 | 0.3667 | 0.6008 | 0.7855 | | 0.331 | 61.0 | 4392 | 0.4412 | 0.4206 | 0.6145 | 0.4579 | 0.1699 | 0.2276 | 0.3966 | 0.6413 | 0.8047 | | 0.3124 | 62.0 | 4464 | 0.4455 | 0.4236 | 0.6181 | 0.4727 | 0.1715 | 0.2313 | 0.3902 | 0.6417 | 0.8098 | | 0.322 | 63.0 | 4536 | 0.4406 | 0.4230 | 0.6143 | 0.4548 | 0.1716 | 0.2269 | 0.3775 | 0.6425 | 0.8115 | | 0.3194 | 64.0 | 4608 | 0.4473 | 0.4331 | 0.6193 | 0.4657 | 0.1765 | 0.2297 | 0.3606 | 0.6122 | 0.8014 | | 0.3159 | 65.0 | 4680 | 0.4407 | 0.4225 | 0.6186 | 0.4548 | 0.1712 | 0.2293 | 0.3913 | 0.6433 | 0.8075 | | 0.3118 | 66.0 | 4752 | 0.4478 | 0.4258 | 0.6169 | 0.4801 | 0.1728 | 0.2315 | 0.3762 | 0.6391 | 0.8064 | | 0.336 | 67.0 | 4824 | 0.4659 | 0.4463 | 0.6252 | 0.5210 | 0.1834 | 0.2394 | 0.3464 | 0.5820 | 0.7786 | | 0.3233 | 68.0 | 4896 | 0.4370 | 0.4208 | 0.6168 | 0.4452 | 0.1696 | 0.2265 | 0.4019 | 0.6425 | 0.8059 | | 0.3285 | 69.0 | 4968 | 0.4479 | 0.4340 | 0.6189 | 0.4773 | 0.1771 | 0.2312 | 0.3609 | 0.6136 | 0.7972 | | 0.3186 | 70.0 | 5040 | 0.4469 | 0.4308 | 0.6198 | 0.4698 | 0.1751 | 0.2310 | 0.3741 | 0.6219 | 0.7966 | | 0.3351 | 71.0 | 5112 | 0.4476 | 0.4292 | 0.6176 | 0.4769 | 0.1745 | 0.2311 | 0.3718 | 0.6220 | 0.8035 | | 0.3286 | 72.0 | 5184 | 0.4415 | 0.4229 | 0.6155 | 0.4655 | 0.1713 | 0.2289 | 0.3816 | 0.6376 | 0.8117 | | 0.3135 | 73.0 | 5256 | 0.4527 | 0.4335 | 0.6198 | 0.4918 | 0.1769 | 0.2338 | 0.3621 | 0.6152 | 0.8036 | | 0.3244 | 74.0 | 5328 | 0.4449 | 0.4290 | 0.6171 | 0.4685 | 0.1746 | 0.2296 | 0.3667 | 0.6234 | 0.8073 | | 0.3253 | 75.0 | 5400 | 0.4450 | 0.4303 | 0.6182 | 0.4680 | 0.1750 | 0.2296 | 0.3703 | 0.6185 | 0.8013 | | 0.3072 | 76.0 | 5472 | 0.4312 | 0.4212 | 0.6161 | 0.4337 | 0.1700 | 0.2242 | 0.3840 | 0.6411 | 0.8104 | | 0.3159 | 77.0 | 5544 | 0.4434 | 0.4314 | 0.6186 | 0.4636 | 0.1754 | 0.2290 | 0.3643 | 0.6171 | 0.7996 | | 0.3176 | 78.0 | 5616 | 0.4319 | 0.4207 | 0.6177 | 0.4330 | 0.1695 | 0.2249 | 0.3889 | 0.6524 | 0.8080 | | 0.3243 | 79.0 | 5688 | 0.4432 | 0.4304 | 0.6186 | 0.4698 | 0.1752 | 0.2302 | 0.3667 | 0.6218 | 0.8058 | | 0.3183 | 80.0 | 5760 | 0.4438 | 0.4288 | 0.6175 | 0.4665 | 0.1742 | 0.2294 | 0.3730 | 0.6235 | 0.8030 | | 0.323 | 81.0 | 5832 | 0.4365 | 0.4248 | 0.6170 | 0.4480 | 0.1716 | 0.2263 | 0.3820 | 0.6313 | 0.8056 | | 0.3348 | 82.0 | 5904 | 0.4385 | 0.4280 | 0.6179 | 0.4532 | 0.1738 | 0.2273 | 0.3651 | 0.6249 | 0.8099 | | 0.2948 | 83.0 | 5976 | 0.4456 | 0.4330 | 0.6190 | 0.4727 | 0.1763 | 0.2305 | 0.3622 | 0.6121 | 0.7981 | | 0.3156 | 84.0 | 6048 | 0.4349 | 0.4236 | 0.6155 | 0.4442 | 0.1712 | 0.2252 | 0.3834 | 0.6331 | 0.8086 | | 0.3227 | 85.0 | 6120 | 0.4352 | 0.4251 | 0.6160 | 0.4423 | 0.1719 | 0.2250 | 0.3799 | 0.6293 | 0.8055 | | 0.3044 | 86.0 | 6192 | 0.4349 | 0.4235 | 0.6165 | 0.4444 | 0.1714 | 0.2259 | 0.3858 | 0.6312 | 0.8108 | | 0.3067 | 87.0 | 6264 | 0.4293 | 0.4214 | 0.6150 | 0.4293 | 0.1700 | 0.2229 | 0.3862 | 0.6397 | 0.8102 | | 0.3083 | 88.0 | 6336 | 0.4260 | 0.4164 | 0.6139 | 0.4229 | 0.1673 | 0.2221 | 0.3989 | 0.6536 | 0.8126 | | 
0.2989 | 89.0 | 6408 | 0.4381 | 0.4270 | 0.6168 | 0.4526 | 0.1731 | 0.2270 | 0.3766 | 0.6248 | 0.8051 | | 0.3232 | 90.0 | 6480 | 0.4352 | 0.4230 | 0.6158 | 0.4480 | 0.1711 | 0.2263 | 0.3854 | 0.6358 | 0.8112 | | 0.3201 | 91.0 | 6552 | 0.4361 | 0.4242 | 0.6164 | 0.4462 | 0.1718 | 0.2262 | 0.3842 | 0.6327 | 0.8078 | | 0.3096 | 92.0 | 6624 | 0.4390 | 0.4273 | 0.6171 | 0.4563 | 0.1733 | 0.2279 | 0.3790 | 0.6237 | 0.8046 | | 0.322 | 93.0 | 6696 | 0.4338 | 0.4229 | 0.6157 | 0.4447 | 0.1709 | 0.2258 | 0.3889 | 0.6351 | 0.8069 | | 0.3096 | 94.0 | 6768 | 0.4348 | 0.4238 | 0.6160 | 0.4448 | 0.1714 | 0.2256 | 0.3839 | 0.6342 | 0.8077 | | 0.3067 | 95.0 | 6840 | 0.4414 | 0.4298 | 0.6181 | 0.4628 | 0.1748 | 0.2290 | 0.3707 | 0.6205 | 0.8027 | | 0.3198 | 96.0 | 6912 | 0.4334 | 0.4228 | 0.6162 | 0.4434 | 0.1709 | 0.2258 | 0.3872 | 0.6370 | 0.8077 | | 0.295 | 97.0 | 6984 | 0.4367 | 0.4261 | 0.6169 | 0.4507 | 0.1728 | 0.2269 | 0.3791 | 0.6283 | 0.8045 | | 0.305 | 98.0 | 7056 | 0.4373 | 0.4266 | 0.6171 | 0.4524 | 0.1730 | 0.2273 | 0.3781 | 0.6280 | 0.8046 | | 0.3304 | 99.0 | 7128 | 0.4334 | 0.4230 | 0.6162 | 0.4432 | 0.1709 | 0.2257 | 0.3874 | 0.6378 | 0.8062 | | 0.3099 | 100.0 | 7200 | 0.4360 | 0.4251 | 0.6169 | 0.4500 | 0.1721 | 0.2269 | 0.3828 | 0.6326 | 0.8051 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
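For reference (not part of the generated card), a minimal inference sketch using the `transformers` depth-estimation pipeline, which supports GLPN checkpoints in the framework version listed above; `room.jpg` is a placeholder path.

```python
from transformers import pipeline

depth_estimator = pipeline(
    "depth-estimation",
    model="sayakpaul/glpn-nyu-finetuned-diode-230103-091356",
)
result = depth_estimator("room.jpg")    # local path, URL or PIL image
result["depth"].save("room_depth.png")  # predicted depth map as a PIL image
```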
4f89f148e291d96ba653b512bb737059
abhishek/convnext-tiny-finetuned-dogfood
abhishek
convnext
14
5
transformers
1
image-classification
true
false
false
apache-2.0
null
['imagefolder', 'lewtun/dog_food']
null
4
3
1
0
0
0
0
['generated_from_trainer']
true
true
true
1,333
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-finetuned-dogfood This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the lewtun/dog_food dataset. It achieves the following results on the evaluation set: - Loss: 0.9277 - Accuracy: 0.7253 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0681 | 1.0 | 16 | 0.9125 | 0.7422 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
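A quick inference sketch (added for illustration, with a placeholder image path): the checkpoint can be queried through the image-classification pipeline.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="abhishek/convnext-tiny-finetuned-dogfood",
)
print(classifier("photo.jpg"))  # top labels with scores
```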
b448418dbfe724b333b55433252d24fc
vasista22/whisper-kannada-small
vasista22
whisper
12
12
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['kn']
null
null
0
0
0
0
0
0
0
['whisper-event']
true
true
true
1,322
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Kannada Small This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Kannada data available from multiple publicly available ASR corpuses. It has been fine-tuned as a part of the Whisper fine-tuning sprint. ## Training and evaluation data Training Data: MILE ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Google/Fleurs Train+Dev set. Evaluation Data: Google/Fleurs Test set, MILE Test set, OpenSLR. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.7e-05 - train_batch_size: 48 - eval_batch_size: 32 - seed: 22 - optimizer: adamw_bnb_8bit - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10000 - training_steps: 12033 (terminated upon convergence. Initially set to 51570 steps) - mixed_precision_training: True ## Acknowledgement This work was done at Speech Lab, IITM. The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
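A transcription sketch (not from the original card): the checkpoint is loaded with the ASR pipeline and Kannada transcription is forced via the tokenizer's decoder prompt IDs; `sample_kn.wav` is a placeholder audio file.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vasista22/whisper-kannada-small")

# force Kannada transcription (assumed to match the intended use of this fine-tune)
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="kn", task="transcribe"
)
print(asr("sample_kn.wav")["text"])
```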
10f187397293575228a07bc4a733ba9b
DLL888/roberta-base-squad
DLL888
roberta
11
5
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,177
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # DLL888/roberta-base-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7054 - Train End Logits Accuracy: 0.8022 - Train Start Logits Accuracy: 0.7586 - Validation Loss: 0.8224 - Validation End Logits Accuracy: 0.7692 - Validation Start Logits Accuracy: 0.7402 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10570, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 500, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: mixed_float16 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.1613 | 0.7038 | 0.6632 | 0.8676 | 0.7626 | 0.7342 | 0 | | 0.7054 | 0.8022 | 0.7586 | 0.8224 | 0.7692 | 0.7402 | 1 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
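An inference sketch (not part of the generated card): since the card reports Keras/TensorFlow training, the pipeline is created with `framework="tf"`; the question/context pair is made up.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="DLL888/roberta-base-squad",
    framework="tf",  # the card reports TensorFlow training
)
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```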
3a4b7d75a268755328cc963542453253
ManujArora/t5-base-squadqtngen
ManujArora
t5
24
26
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['question_generator', 'generated_from_trainer']
true
true
true
1,527
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-squadqtngen This model is a fine-tuned version of [ManujArora/t5-base-squadqtngen](https://huggingface.co/ManujArora/t5-base-squadqtngen) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 248 | 1.6398 | | No log | 2.0 | 496 | 1.6440 | | No log | 3.0 | 744 | 1.6594 | | No log | 4.0 | 992 | 1.6720 | | No log | 5.0 | 1240 | 1.6824 | | No log | 6.0 | 1488 | 1.6949 | | No log | 7.0 | 1736 | 1.7032 | | No log | 8.0 | 1984 | 1.7049 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
ef529c7eaa4beb152113b3e081bfccaf
Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy
Yehor
wav2vec2
14
3
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['uk']
['mozilla-foundation/common_voice_10_0']
null
0
0
0
0
0
0
0
[]
false
true
true
1,132
false
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk ⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk This model has been trained on noisy data in order to make the acoustic model robust to noisy audio. Its vocabulary includes apostrophes and hyphens. The language model is trained on the texts of the Common Voice dataset, which is also used during training. Special thanks to **Dmytro Chaplynsky**, https://lang.org.ua, for the noised data. Noisy dataset: - Transcriptions: https://www.dropbox.com/s/ohj3y2cq8f4207a/transcriptions.zip?dl=0 - Audio files: https://www.dropbox.com/s/v8crgclt9opbrv1/data.zip?dl=0 Metrics: | Dataset | CER | WER | |-|-|-| | CV10 (no LM) | 0.0515 | 0.2617 | | CV10 (with LM) | 0.0148 | 0.0524 | Metrics on noisy data with the [standard model](https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm): | Dataset | CER | WER | |-|-|-| | CV10 (no LM) | 0.1064 | 0.3926 | | CV10 (with LM) | 0.0497 | 0.1265 | More: - The same model, but trained on raw Common Voice data: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
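For convenience, a small usage sketch (an addition, not from the original card): with `pyctcdecode` and `kenlm` installed, the ASR pipeline should pick up the bundled language model automatically; `speech_uk.wav` is a placeholder 16 kHz mono recording.

```python
# pip install transformers pyctcdecode kenlm  (the last two enable LM decoding)
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy",
)
print(asr("speech_uk.wav")["text"])
```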
407a98929e8b80bdb5a8f8ea5f6328a9
bansals10/wav2vec2-large-xls-r-300m-turkish-colab
bansals10
wav2vec2
19
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,104
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
7bdf62942ee4d4df42f390375b39123d
Wende/bert-finetuned-ner1
Wende
bert
12
17
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,521
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0584 - Precision: 0.9286 - Recall: 0.9475 - F1: 0.9379 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2183 | 1.0 | 878 | 0.0753 | 0.9087 | 0.9291 | 0.9188 | 0.9800 | | 0.0462 | 2.0 | 1756 | 0.0614 | 0.9329 | 0.9470 | 0.9399 | 0.9858 | | 0.0244 | 3.0 | 2634 | 0.0584 | 0.9286 | 0.9475 | 0.9379 | 0.9859 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.8.2+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
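A usage sketch (added for illustration): the checkpoint can be served through the token-classification pipeline with grouped entities; the example sentence is made up.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Wende/bert-finetuned-ner1",
    aggregation_strategy="simple",  # group sub-tokens into whole entities
)
print(ner("Hugging Face was founded in New York by Clément Delangue."))
```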
53242729cb69028a9cccc2f315d69f17
sd-concepts-library/bamse-og-kylling
sd-concepts-library
null
10
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,193
false
### Bamse og kylling on Stable Diffusion This is the `<bamse-kylling>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<bamse-kylling> 0](https://huggingface.co/sd-concepts-library/bamse-og-kylling/resolve/main/concept_images/2.jpeg) ![<bamse-kylling> 1](https://huggingface.co/sd-concepts-library/bamse-og-kylling/resolve/main/concept_images/1.jpeg) ![<bamse-kylling> 2](https://huggingface.co/sd-concepts-library/bamse-og-kylling/resolve/main/concept_images/0.jpeg) ![<bamse-kylling> 3](https://huggingface.co/sd-concepts-library/bamse-og-kylling/resolve/main/concept_images/3.jpeg) ![<bamse-kylling> 4](https://huggingface.co/sd-concepts-library/bamse-og-kylling/resolve/main/concept_images/4.jpeg)
7d6f05fdfe408cb9729b93fa4f724e81
pcuenq/ddpm-ema-pets-64
pcuenq
null
16
1
diffusers
0
null
false
false
false
apache-2.0
['en']
['pcuenq/oxford-pets']
null
0
0
0
0
0
0
0
[]
false
true
true
1,206
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-ema-pets-64 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `pcuenq/oxford-pets` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: cosine - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: no ### Training results 📈 [TensorBoard logs](https://huggingface.co/pcuenq/ddpm-ema-pets-64/tensorboard?#scalars)
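The "How to use" block above is still a TODO; as a sketch under the usual `diffusers` API for unconditional DDPM checkpoints (not an official example from this repository), sampling could look like this.

```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("pcuenq/ddpm-ema-pets-64")
image = pipe(num_inference_steps=1000).images[0]  # one 64x64 sample
image.save("pet_sample.png")
```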
66395ffea2fe6631dc562a4a0ceb5181
elopezlopez/distilbert-base-uncased_fold_7_ternary_v1
elopezlopez
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,659
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_fold_7_ternary_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0462 - F1: 0.7836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 291 | 0.5719 | 0.7490 | | 0.5541 | 2.0 | 582 | 0.5563 | 0.7836 | | 0.5541 | 3.0 | 873 | 0.7301 | 0.7849 | | 0.2509 | 4.0 | 1164 | 0.8073 | 0.7926 | | 0.2509 | 5.0 | 1455 | 1.0842 | 0.7823 | | 0.1182 | 6.0 | 1746 | 1.1721 | 0.7900 | | 0.0537 | 7.0 | 2037 | 1.4060 | 0.7785 | | 0.0537 | 8.0 | 2328 | 1.4497 | 0.7836 | | 0.0262 | 9.0 | 2619 | 1.4722 | 0.7708 | | 0.0262 | 10.0 | 2910 | 1.6529 | 0.7772 | | 0.0131 | 11.0 | 3201 | 1.6573 | 0.7862 | | 0.0131 | 12.0 | 3492 | 1.6986 | 0.7823 | | 0.0115 | 13.0 | 3783 | 1.7765 | 0.7810 | | 0.0098 | 14.0 | 4074 | 1.8036 | 0.7862 | | 0.0098 | 15.0 | 4365 | 1.7684 | 0.7926 | | 0.0028 | 16.0 | 4656 | 1.8385 | 0.7836 | | 0.0028 | 17.0 | 4947 | 1.7903 | 0.7887 | | 0.0054 | 18.0 | 5238 | 1.9065 | 0.7810 | | 0.0007 | 19.0 | 5529 | 1.9331 | 0.7875 | | 0.0007 | 20.0 | 5820 | 1.9384 | 0.7849 | | 0.0006 | 21.0 | 6111 | 1.8687 | 0.7887 | | 0.0006 | 22.0 | 6402 | 2.0603 | 0.7785 | | 0.0009 | 23.0 | 6693 | 2.0403 | 0.7836 | | 0.0009 | 24.0 | 6984 | 2.0348 | 0.7810 | | 0.0005 | 25.0 | 7275 | 2.0462 | 0.7836 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
7b6c7a54038b2afaab686d6a1b68fc7d
domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier
domenicrosati
deberta-v2
17
3
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['text-classification', 'generated_from_trainer']
true
true
true
1,575
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0979 - Accuracy: 0.9682 - F1: 0.8332 - Recall: 0.8466 - Precision: 0.8202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.1539 | 1.0 | 6667 | 0.1237 | 0.9584 | 0.7668 | 0.7307 | 0.8067 | | 0.1271 | 2.0 | 13334 | 0.0979 | 0.9682 | 0.8332 | 0.8466 | 0.8202 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
66af82bfbc597e73fa34fe4338b16e39
gunghio/xlm-roberta-base-finetuned-panx-ner
gunghio
xlm-roberta
9
10
transformers
0
token-classification
true
false
false
['mit']
['it', 'en', 'de', 'fr', 'es', 'multilingual']
['xtreme']
null
1
0
1
0
0
0
0
[]
false
true
true
1,749
false
# gunghio/xlm-roberta-base-finetuned-panx-ner This model was trained starting from xlm-roberta-base on a subset of the xtreme dataset. The `xtreme` subsets used are: PAN-X.{lang}. Languages used for training/validation are: Italian, English, German, French and Spanish. Only 75% of the whole dataset was used. ## Intended uses & limitations The fine-tuned model can be used for Named Entity Recognition in it, en, de, fr, and es. ## Training and evaluation data Training dataset: [xtreme](https://huggingface.co/datasets/xtreme) ### Training results It achieves the following results on the evaluation set: - Precision: 0.8744154472771157 - Recall: 0.8791424269015351 - F1: 0.8767725659462058 - Accuracy: 0.9432040948504613 Details: | Label | Precision | Recall | F1-Score | Support | |---------|-----------|--------|----------|---------| | PER | 0.922 | 0.908 | 0.915 | 26639 | | LOC | 0.880 | 0.906 | 0.892 | 37623 | | ORG | 0.821 | 0.816 | 0.818 | 28045 | | Overall | 0.874 | 0.879 | 0.877 | 92307 | ## Usage Set the aggregation strategy according to the [documentation](https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/pipelines#transformers.TokenClassificationPipeline). ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner") model = AutoModelForTokenClassification.from_pretrained("gunghio/xlm-roberta-base-finetuned-panx-ner") nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first") example = "My name is Wolfgang and I live in Berlin" ner_results = nlp(example) print(ner_results) ```
f437564e3a93ba5e289e4d660f6aafc0
kashif/music-spectrogram-diffusion
kashif
null
11
24
diffusers
3
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers']
false
true
true
1,613
false
# Multi-instrument Music Synthesis with Spectrogram Diffusion ## Abstract An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes. <img src="https://storage.googleapis.com/music-synthesis-with-spectrogram-diffusion/architecture.png" alt="Architecture diagram">
7a0494b54fc8a70dfb0234e6f9e9a140
bigscience/mt0-xxl-p3
bigscience
mt5
14
128
transformers
1
text-generation
true
false
false
apache-2.0
['af', 'am', 'ar', 'az', 'be', 'bg', 'bn', 'ca', 'ceb', 'co', 'cs', 'cy', 'da', 'de', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fil', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'haw', 'hi', 'hmn', 'ht', 'hu', 'hy', 'ig', 'is', 'it', 'iw', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lb', 'lo', 'lt', 'lv', 'mg', 'mi', 'mk', 'ml', 'mn', 'mr', 'ms', 'mt', 'my', 'ne', 'nl', 'no', 'ny', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'sd', 'si', 'sk', 'sl', 'sm', 'sn', 'so', 'sq', 'sr', 'st', 'su', 'sv', 'sw', 'ta', 'te', 'tg', 'th', 'tr', 'uk', 'und', 'ur', 'uz', 'vi', 'xh', 'yi', 'yo', 'zh', 'zu']
['Muennighoff/P3', 'mc4']
null
0
0
0
0
0
0
0
[]
true
true
true
8,935
false
![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. [Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co) - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xxl-p3" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xxl-p3" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xxl-p3" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [mt5-xxl](https://huggingface.co/google/mt5-xxl), also refer to the `config.json` file - **Finetuning steps:** 7000 - **Finetuning tokens:** 1.29 billion - **Precision:** bfloat16 ## Hardware - **TPUs:** TPUv4-256 ## Software - **Orchestration:** [T5X](https://github.com/google-research/t5x) - **Neural networks:** [Jax](https://github.com/google/jax) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
491541671b960f3867c065d4ae9cc67d
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
anas-awadalla
bert
16
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,000
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
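Since the card stops at the training setup, here is a minimal inference sketch (not part of the original card). It assumes the checkpoint works with the standard `question-answering` pipeline; the question and context strings are invented for illustration.

```python
# Hedged usage sketch: extractive QA with the transformers pipeline.
# The question/context below are made-up examples, not taken from SQuAD.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```

Given the few-shot training regime (k=32 examples), answer quality may be limited.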
acaeb493b161d0a4b2f75f8dfd17c02e
domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt
domenicrosati
deberta-v2
16
6
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['fill-mask', 'generated_from_trainer']
true
true
true
1,600
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large-dapt-scientific-papers-pubmed-tapt This model is a fine-tuned version of [domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed](https://huggingface.co/domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4429 - Accuracy: 0.5915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 3.3855 | 1.0 | 4134 | 3.2334 | 0.4953 | | 2.9224 | 2.0 | 8268 | 2.8317 | 0.5430 | | 2.703 | 3.0 | 12402 | 2.6141 | 0.5665 | | 2.4963 | 4.0 | 16536 | 2.4918 | 0.5855 | | 2.399 | 5.0 | 20670 | 2.4429 | 0.5915 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
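The card does not show a usage example; the sketch below (not part of the original card) assumes the checkpoint loads with the standard `fill-mask` pipeline, and the biomedical-style sentence is illustrative only.

```python
# Hedged usage sketch: masked-token prediction with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt",
)
# Use the tokenizer's own mask token rather than hard-coding it.
text = f"The patient was treated with a broad-spectrum {fill.tokenizer.mask_token}."
for pred in fill(text, top_k=5):
    print(pred["token_str"], round(pred["score"], 4))
```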
f0933a0e019bb9541ff820508cb34ae3
jonatasgrosman/exp_w2v2t_sv-se_vp-fr_s237
jonatasgrosman
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sv-SE']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'sv-SE']
false
true
true
475
false
# exp_w2v2t_sv-se_vp-fr_s237 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
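The card points to the HuggingSound tool; as an alternative, a plain `transformers` sketch is given below (not part of the original card). It assumes the checkpoint is a standard Wav2Vec2 CTC model, and `audio.wav` is a placeholder for a 16 kHz recording.

```python
# Hedged usage sketch: Swedish transcription with the generic ASR pipeline.
# "audio.wav" is a placeholder path; the input must be sampled at 16 kHz.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_sv-se_vp-fr_s237",
)
print(asr("audio.wav")["text"])
```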
78ff6718f010fb3757fc99c7c9ea7f2a
jx7789/xlm-roberta-base-finetuned-panx-en
jx7789
xlm-roberta
10
1
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3926 - F1: 0.6991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1415 | 1.0 | 50 | 0.5404 | 0.5163 | | 0.5045 | 2.0 | 100 | 0.4347 | 0.6498 | | 0.371 | 3.0 | 150 | 0.3926 | 0.6991 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
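A short NER sketch (not part of the original card) may help; it assumes the usual `token-classification` pipeline works for this PAN-X checkpoint, and the input sentence is invented.

```python
# Hedged usage sketch: named-entity recognition on English text.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jx7789/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
for entity in ner("Jeff Dean works for Google in Mountain View, California."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```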
a4f7a372359915da17d5450b3d24ac95
scjnugacj/jurisbert
scjnugacj
roberta
13
562
transformers
5
fill-mask
true
false
false
other
['es']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,586
false
# JurisBert JurisBert is an initiative of the **Suprema Corte de Justicia de la Nación (SCJN) of Mexico**. It was started in August 2020, at the proposal of the **Unidad General de Administración del Conocimiento Jurídico (UGACJ)**, to train a language model contextualized to the legal domain. Its main objective is to build **Natural Language Processing (NLP)** applications that support the jurisdictional work of the High Court by leveraging the SCJN's knowledge captured in the unstructured documents produced by its jurisdictional areas. In 2021 this initiative gained further relevance with the arrival of the Judicial Reform and the start of the eleventh epoch of the SJF, since the main goals behind creating JurisBert are helping to identify precedent and building information retrieval platforms. As part of the Digital Transformation driven by the SCJN, in order to foster an “Open Government” scheme through Collaboration and Innovation, and in the context of the remote operation forced by the health contingency caused by the SARS-CoV-2 virus, this technological innovation is made available to the whole community, with the aim of giving back to the citizenry the knowledge generated by the High Court. In its first version, JurisBert is a Transformer-based language model, built on top of SpanBERTa. ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("scjnugacj/jurisbert") model = AutoModel.from_pretrained("scjnugacj/jurisbert") ``` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="scjnugacj/jurisbert", tokenizer="scjnugacj/jurisbert" ) fill_mask("interés superior del <mask>.") [ { "score": 0.941512405872345, "token": 3152, "token_str": " menor", "sequence": "interés superior del menor" }, { "score": 0.046888645738363266, "token": 3337, "token_str": " niño", "sequence": "interés superior del niño" }, { "score": 0.004166217986494303, "token": 9386, "token_str": " adolescente", "sequence": "interés superior del adolescente" }, { "score": 0.0008063237182796001, "token": 4914, "token_str": " menores", "sequence": "interés superior del menores" }, { "score": 0.0006806919700466096, "token": 48133, "token_str": " infante", "sequence": "interés superior del infante" } ] ``` # Terms of use By downloading this model you have agreed to be bound by the terms set out in this legal notice. The owner of the model reserves the right to amend, modify or replace these terms of use at any time and without prior notice. Any person or entity that deploys or provides systems, services and/or any technology to third parties using this model and/or any model derived from it must bear in mind that it is their responsibility to mitigate the risks arising from its use and to comply with the applicable regulations at all times. Under no circumstances shall the owner of the models (SCJN – Suprema Corte de Justicia de la Nación) or the (UGACJ - Unidad General de Administración del Conocimiento Jurídico) be liable for the results arising from the use given to these models. ## Intended use This model was created so that any person or institution can build tools for consulting legal information of the Mexican State based on language models.
02956cb4f36c4db18aa99e09f35021d6
mrm8488/ddpm-ema-pokemon-64
mrm8488
null
6
6
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/pokemon']
null
0
0
0
0
0
0
0
[]
false
true
true
1,521
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-ema-pokemon-64 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/pokemon` dataset. ## Intended uses & limitations #### How to use ```python from diffusers import DDPMPipeline model_id = "mrm8488/ddpm-ema-pokemon-64" # load model and scheduler pipeline = DDPMPipeline.from_pretrained(model_id) # run pipeline in inference image = pipeline()["sample"] # save image image[0].save("pokemon.png") ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 256 - eval_batch_size: 128 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/mrm8488/ddpm-ema-pokemon-64/tensorboard?#scalars) > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Q Blocks](https://www.qblocks.cloud/)
d564745113fc9a5bff00b48361aa2083
neuralmind/bert-base-portuguese-cased
neuralmind
bert
10
609,957
transformers
49
fill-mask
true
true
true
mit
['pt']
['brWaC']
null
1
0
1
0
3
3
0
['bert', 'pytorch']
false
true
true
3,517
false
# BERTimbau Base (aka "bert-base-portuguese-cased") ![Bert holding a berimbau](https://imgur.com/JZ7Hynh.jpg) ## Introduction BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large. For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/). ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `neuralmind/bert-base-portuguese-cased` | BERT-Base | 12 | 110M | | `neuralmind/bert-large-portuguese-cased` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('neuralmind/bert-base-portuguese-cased') tokenizer = AutoTokenizer.from_pretrained('neuralmind/bert-base-portuguese-cased', do_lower_case=False) ``` ### Masked language modeling prediction example ```python from transformers import pipeline pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer) pipe('Tinha uma [MASK] no meio do caminho.') # [{'score': 0.14287759363651276, # 'sequence': '[CLS] Tinha uma pedra no meio do caminho. [SEP]', # 'token': 5028, # 'token_str': 'pedra'}, # {'score': 0.06213393807411194, # 'sequence': '[CLS] Tinha uma árvore no meio do caminho. [SEP]', # 'token': 7411, # 'token_str': 'árvore'}, # {'score': 0.05515013635158539, # 'sequence': '[CLS] Tinha uma estrada no meio do caminho. [SEP]', # 'token': 5675, # 'token_str': 'estrada'}, # {'score': 0.0299188531935215, # 'sequence': '[CLS] Tinha uma casa no meio do caminho. [SEP]', # 'token': 1105, # 'token_str': 'casa'}, # {'score': 0.025660505518317223, # 'sequence': '[CLS] Tinha uma cruz no meio do caminho. [SEP]', # 'token': 3466, # 'token_str': 'cruz'}] ``` ### For BERT embeddings ```python import torch model = AutoModel.from_pretrained('neuralmind/bert-base-portuguese-cased') input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt') with torch.no_grad(): outs = model(input_ids) encoded = outs[0][0, 1:-1] # Ignore [CLS] and [SEP] special tokens # encoded.shape: (8, 768) # tensor([[-0.0398, -0.3057, 0.2431, ..., -0.5420, 0.1857, -0.5775], # [-0.2926, -0.1957, 0.7020, ..., -0.2843, 0.0530, -0.4304], # [ 0.2463, -0.1467, 0.5496, ..., 0.3781, -0.2325, -0.5469], # ..., # [ 0.0662, 0.7817, 0.3486, ..., -0.4131, -0.2852, -0.2819], # [ 0.0662, 0.2845, 0.1871, ..., -0.2542, -0.2933, -0.0661], # [ 0.2761, -0.1657, 0.3288, ..., -0.2102, 0.0029, -0.2009]]) ``` ## Citation If you use our work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } ```
4f57e4eb5387b26e6f57516b13aab625
haanba/itscalling-mob-umamusume-concept
haanba
null
25
0
null
0
text-to-image
false
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
6,999
false
# Its Calling (Mob Umamusume) on Waifu Diffusion v1.3.5 This is the `<wd135-itscalling-mob-umamusume>` concept taught to [Waifu Diffusion v1.3.5](https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/models/wd-1-3-5_80000-fp32.ckpt) via Textual Inversion. ## Credits The training images were selectively taken from [Pixiv](https://www.pixiv.net), [Twitter](https://twitter.com), and in-game screenshots of Uma Musume Pretty Derby. A CSV file describing the original sources for most images is available in the [raw dataset archive file](./datasets/raw.7z). ## Input Here is the new concept you will be able to use as an `object`: ![<wd135-itscalling-mob-umamusume> input 0](./concept_images/91370005_p0_transparent_512x512.png) ![<wd135-itscalling-mob-umamusume> input 1](./concept_images/FgzUbx1aEAEFDdO_512x512.png) ![<wd135-itscalling-mob-umamusume> input 2](./concept_images/FH5MdF7acAA42RG_512x512.png) ![<wd135-itscalling-mob-umamusume> input 3](./concept_images/Fklj8U4aYAIOAGP_512x512.png) ![<wd135-itscalling-mob-umamusume> input 4](./concept_images/FRXH5ibUcAE4KLZ_512x512.png) ## Output Examples Some images that can be possibly generated by using the new concept: !["<wd135-itscalling-mob-umamusume>, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 3505534900 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000013.63c4d22c.3505534900.png) ```json { "model": "stable diffusion", "model_weights": "waifu-diffusion-1.3.5", "model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452", "app_id": "invoke-ai/InvokeAI", "app_version": "2.2.4", "image": { "prompt": [ { "prompt": "<wd135-itscalling-mob-umamusume>, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]", "weight": 1 } ], "steps": 64, "cfg_scale": 10, "threshold": 0, "perlin": 0, "height": 768, "width": 512, "seed": 3505534900, "seamless": false, "hires_fix": false, "type": "txt2img", "postprocessing": null, "sampler": "k_dpmpp_2", "variations": [] } } ``` !["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 821696414 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000019.37833118.821696414.png) ```json { "model": "stable diffusion", "model_weights": "waifu-diffusion-1.3.5", "model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452", "app_id": "invoke-ai/InvokeAI", "app_version": "2.2.4", "image": { "prompt": [ { "prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]", "weight": 1 } ], "steps": 64, "cfg_scale": 10, "threshold": 0, "perlin": 0, "height": 768, "width": 512, "seed": 821696414, "seamless": false, "hires_fix": false, "type": 
"txt2img", "postprocessing": null, "sampler": "k_dpmpp_2", "variations": [] } } ``` !["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 460073536 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000020.58cf5625.460073536.png) ```json { "model": "stable diffusion", "model_weights": "waifu-diffusion-1.3.5", "model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452", "app_id": "invoke-ai/InvokeAI", "app_version": "2.2.4", "image": { "prompt": [ { "prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, running outdoors park, white t-shirts black shorts, morning sunlight, pov from side looking at viewer cowboy shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]", "weight": 1 } ], "steps": 64, "cfg_scale": 10, "threshold": 0, "perlin": 0, "height": 768, "width": 512, "seed": 460073536, "seamless": false, "hires_fix": false, "type": "txt2img", "postprocessing": null, "sampler": "k_dpmpp_2", "variations": [] } } ``` !["<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, school sailor uniform white shirt purple pleated skirt, standing looking at viewer smile one eye closed arms behind back, standing indoors empty classroom, dusk sunset ambience light, full body shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]" -s 64 -S 1869090925 -W 512 -H 768 -C 10 -A k_dpmpp_2](./examples/000032.f35340f2.1869090925.png) ```json { "model": "stable diffusion", "model_weights": "waifu-diffusion-1.3.5", "model_hash": "b438efac4434af4e482d20cdfcea64067f8dfec438628261d2f2aa60ffc41452", "app_id": "invoke-ai/InvokeAI", "app_version": "2.2.4", "image": { "prompt": [ { "prompt": "<wd135-itscalling-mob-umamusume> horse ears horse tail horse girl, school sailor uniform white shirt purple pleated skirt, standing looking at viewer smile one eye closed arms behind back, standing indoors empty classroom, dusk sunset ambience light, full body shot, [bad anatomy, bad hands, bad perspective, bad proportions, blurry, censored, cropped, error, extra arms, extra ears, fewer digits, jpeg artifacts, lowres, multiple legs, out of frame, poorly drawn]", "weight": 1 } ], "steps": 64, "cfg_scale": 10, "threshold": 0, "perlin": 0, "height": 768, "width": 512, "seed": 1869090925, "seamless": false, "hires_fix": false, "type": "txt2img", "postprocessing": null, "sampler": "k_dpmpp_2", "variations": [] } } ``` ## License [MIT](./LICENSE).
14a9e5f7d84b36854498b1e779f8a8b1
lurker18/distilbert-base-uncased-finetuned-emotion
lurker18
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,344
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2220 - Accuracy: 0.9215 - F1: 0.9216 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8267 | 1.0 | 250 | 0.3110 | 0.909 | 0.9073 | | 0.252 | 2.0 | 500 | 0.2220 | 0.9215 | 0.9216 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.7.1+cu110 - Datasets 1.16.1 - Tokenizers 0.10.3
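A minimal classification sketch (not part of the original card); it assumes the standard `text-classification` pipeline, and the predicted labels may show up as `LABEL_0` … `LABEL_5` unless `id2label` was set in the config.

```python
# Hedged usage sketch: emotion classification of a single sentence.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="lurker18/distilbert-base-uncased-finetuned-emotion",
)
# Labels may appear as LABEL_0..LABEL_5 if id2label was not customised.
print(clf("I can't believe how well this turned out, I'm thrilled!"))
```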
9be5929deeeaf899c8022e638ca867de
Laughify/aipom-from-pokemon-diffusion
Laughify
null
19
20
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
0
0
0
['text-to-image']
false
true
true
595
false
### Aipom_From_Pokémon-Diffusion Dreambooth model trained by Laughify with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
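For readers who prefer a plain `diffusers` script over the notebooks, a hedged sketch follows (not part of the original card). The instance token in the prompt is a guess — check the concept name actually used during DreamBooth training.

```python
# Hedged usage sketch: text-to-image with diffusers. The prompt token "aipom"
# is an assumption; the actual DreamBooth instance token may differ.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Laughify/aipom-from-pokemon-diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe("a drawing of aipom sitting on a tree branch").images[0]
image.save("aipom.png")
```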
b2d3246dc12465dab9748a980c40861b
LiptaphX/wav2vec2-large-xls-r-300m-turkish-colab
LiptaphX
wav2vec2
15
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,105
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
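No usage snippet is included in the card, so a hedged sketch with the lower-level processor/model API is given below (not part of the original card). `audio.wav` is a placeholder for a 16 kHz Turkish recording.

```python
# Hedged usage sketch: greedy CTC decoding with the processor/model API.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "LiptaphX/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("audio.wav", sr=16_000)  # placeholder path, 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```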
f1d3d712331d520936e6f855f3a2eb63
jonatasgrosman/exp_w2v2t_pl_wav2vec2_s530
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pl']
false
true
true
456
false
# exp_w2v2t_pl_wav2vec2_s530 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
999bf37977899d73ef39f92453b68133
inverse-scaling/opt-2.7b_eval
inverse-scaling
opt
10
3
transformers
0
text-generation
true
true
true
other
['en']
null
null
19
6
9
4
0
0
0
['text-generation', 'opt']
true
true
true
8,675
false
# OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modedling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-2.7b") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and I am a human being.\nI am a human being, and'}] ``` By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`. 
```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and I make things. I'm in the creative community, which is"}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral the model is strongly biased : > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True, num_return_sequences=5) >>> generator("The woman worked as a") [{'generated_text': "The woman worked as a security guard at a nursery in the city's eastern district of Samut P"}, {'generated_text': 'The woman worked as a doctor in the Philippines. Officials in China allege she stole the coronavirus'}, {'generated_text': 'The woman worked as a teacher in the city of Krasnodar in south Russia. She'}, {'generated_text': 'The woman worked as a researcher and lecturer at the Russian Academy of Sciences in a laboratory dedicated to the'}, {'generated_text': 'The woman worked as a nanny on a property owned by Mr Fitton-Allen in the city'}] ``` compared to: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True, num_return_sequences=5) >>> generator("The man worked as a") [{'generated_text': "The man worked as a security guard at a retirement home after being hired by the administrator's cousin,"}, {'generated_text': 'The man worked as a doctor in the Philippines.\n\nHe had hoped to work his way back'}, {'generated_text': 'The man worked as a teacher in the city of Krasnodar in south Russia.He'}, {'generated_text': 'The man worked as a researcher and his work on the topic predates the project, by many years'}, {'generated_text': 'The man worked as a chef in a restaurant for 40 years. How could this be so different from'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which * Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. 
The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
5165e096816fbce297d1155677829a52
devinc/results
devinc
null
13
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['imagefolder']
null
0
0
0
0
0
0
0
[]
false
true
true
1,176
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # results ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_power: None - ema_max_decay: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/devinc/results/tensorboard?#scalars)
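The "How to use" block above is still a TODO; a hedged sketch is offered here instead. It assumes the checkpoint was saved as an unconditional `DDPMPipeline` — the card does not state the pipeline class, so adjust if needed.

```python
# Hedged sketch for the TODO above: unconditional image sampling.
# DDPMPipeline is an assumption; the card does not name the pipeline class.
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("devinc/results")
image = pipe().images[0]  # on older diffusers releases: pipe()["sample"][0]
image.save("sample.png")
```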
505ca6e824fe9000a0a3474bb3af24f4
abhijeet06793/transformers-abhi
abhijeet06793
t5
7
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,323
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # transformers-abhi This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9227 - Validation Loss: 2.5929 - Train Rougel: tf.Tensor(0.19853836, shape=(), dtype=float32) - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:----------------------------------------------:|:-----:| | 2.9227 | 2.5929 | tf.Tensor(0.19853836, shape=(), dtype=float32) | 0 | ### Framework versions - Transformers 4.20.0 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.12.1
d3301bc46c3c84d6695d01ac812d9e78
yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte
yoshitomo-matsubara
bert
9
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['rte']
null
0
0
0
0
0
0
0
['bert', 'rte', 'glue', 'kd', 'torchdistill']
false
true
true
701
false
`bert-base-uncased` fine-tuned on RTE dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
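RTE is a sentence-pair (textual entailment) task, so inputs are passed as premise/hypothesis pairs. The sketch below is not part of the original card; the example pair is invented, and the label names depend on how `id2label` was configured for this checkpoint.

```python
# Hedged usage sketch: sentence-pair classification for RTE.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte",
)
# Pass the premise/hypothesis pair as a dict with "text" and "text_pair".
print(clf({"text": "A man is playing a guitar on stage.",
           "text_pair": "A musician is performing."}))
```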
9aac27a37d09b8f94b7287d4b417bbdf
luigisaetta/whisper-small3-it
luigisaetta
whisper
22
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['it']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'whisper-event']
true
true
true
1,997
false
# Whisper Small3 Italian This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset. It achieves the following results on the evaluation set: - Loss: 0.2307 - Wer: 10.2508 ## Model description This model is a fine-tuned version of the OpenAI Whisper Small model on the specified dataset. ## Intended uses & limitations This model has been developed as part of the Hugging Face Whisper Fine Tuning sprint, December 2022. It is meant to spread knowledge about how these models are built, and it can be used to develop solutions that need ASR for the Italian language. It has not been extensively tested; accuracy may be lower on other datasets, so please test it before using it. ## Training and evaluation data Trained and tested on Mozilla Common Voice, version 11. ## Training procedure The **run.sh** script and the Python file used for training are saved in the repository. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 6000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.226 | 2.01 | 1000 | 0.2494 | 11.3684 | | 0.1017 | 4.02 | 2000 | 0.2403 | 10.6029 | | 0.0491 | 6.03 | 3000 | 0.2549 | 10.9591 | | 0.1102 | 8.04 | 4000 | 0.2307 | 10.2508 | | 0.0384 | 10.05 | 5000 | 0.2592 | 10.5903 | | 0.0285 | 12.06 | 6000 | 0.2537 | 10.5026 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
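A minimal transcription sketch (not part of the original card); it assumes the checkpoint works with the generic ASR pipeline, and `audio.mp3` is a placeholder file.

```python
# Hedged usage sketch: Italian transcription with the ASR pipeline.
# chunk_length_s lets the pipeline handle recordings longer than 30 seconds.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="luigisaetta/whisper-small3-it",
    chunk_length_s=30,
)
print(asr("audio.mp3")["text"])  # "audio.mp3" is a placeholder path
```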
2ac2b434eabbb9f00a5ae273ab339e8a
questgen/msmarco-distilbert-base-v4-feature-extraction-pipeline
questgen
distilbert
12
0
sentence-transformers
0
feature-extraction
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,564
false
# sentence-transformers/msmarco-distilbert-base-v4 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-v4') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-base-v4') model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-base-v4') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-v4) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
4b487f41769a9a17b38ffc53391089ff
alexgeh196/sentiment_model
alexgeh196
distilbert
13
2
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,031
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3852 - Accuracy: 0.8424 - F1: 0.8398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
20a348b65a127a517753daba4151a741
Isaacp/xlm-roberta-base-finetuned-panx-it
Isaacp
xlm-roberta
10
5
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2401 - F1: 0.8246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8187 | 1.0 | 70 | 0.3325 | 0.7337 | | 0.2829 | 2.0 | 140 | 0.2554 | 0.8003 | | 0.1894 | 3.0 | 210 | 0.2401 | 0.8246 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
64ed349d2b2290f8b3ef7570a50b3357
thanat/mt5-small-finetuned-amazon-en-es
thanat
mt5
9
10
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,717
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # thanat/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [amazon_reviews_multi](https://huggingface.co/datasets/amazon_reviews_multi) dataset. It achieves the following results on the evaluation set: - Train Loss: 4.0061 - Validation Loss: 3.3257 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.6013 | 4.2024 | 0 | | 5.8556 | 3.7335 | 1 | | 5.0930 | 3.5494 | 2 | | 4.6610 | 3.4502 | 3 | | 4.3874 | 3.4030 | 4 | | 4.2103 | 3.3568 | 5 | | 4.0930 | 3.3311 | 6 | | 4.0061 | 3.3257 | 7 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
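Since the card was generated from a Keras run, a TensorFlow inference sketch is shown (not part of the original card); the review text is invented and the summary length is arbitrary.

```python
# Hedged usage sketch: review summarization with the TensorFlow classes.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "thanat/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "I loved this book: the characters were great and the ending surprised me."
inputs = tokenizer(text, return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```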
d979d7585057c3bb6184e3b38b56f0ac
zoha/wav2vec2-base-timit-google-colab
zoha
wav2vec2
24
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,303
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Wer: 0.3080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5787 | 0.87 | 500 | 1.7648 | 1.0305 | | 0.8692 | 1.73 | 1000 | 0.5136 | 0.5103 | | 0.4346 | 2.6 | 1500 | 0.4364 | 0.4515 | | 0.31 | 3.46 | 2000 | 0.3889 | 0.4070 | | 0.234 | 4.33 | 2500 | 0.4161 | 0.3863 | | 0.2054 | 5.19 | 3000 | 0.3845 | 0.3722 | | 0.165 | 6.06 | 3500 | 0.4035 | 0.3643 | | 0.1436 | 6.92 | 4000 | 0.4090 | 0.3623 | | 0.1381 | 7.79 | 4500 | 0.4007 | 0.3673 | | 0.1175 | 8.65 | 5000 | 0.4588 | 0.3632 | | 0.1052 | 9.52 | 5500 | 0.4441 | 0.3588 | | 0.0988 | 10.38 | 6000 | 0.4133 | 0.3489 | | 0.0877 | 11.25 | 6500 | 0.4758 | 0.3510 | | 0.0856 | 12.11 | 7000 | 0.4454 | 0.3425 | | 0.0731 | 12.98 | 7500 | 0.4252 | 0.3351 | | 0.0712 | 13.84 | 8000 | 0.4163 | 0.3370 | | 0.0711 | 14.71 | 8500 | 0.4166 | 0.3367 | | 0.06 | 15.57 | 9000 | 0.4195 | 0.3347 | | 0.0588 | 16.44 | 9500 | 0.4697 | 0.3367 | | 0.0497 | 17.3 | 10000 | 0.4255 | 0.3314 | | 0.0523 | 18.17 | 10500 | 0.4676 | 0.3307 | | 0.0444 | 19.03 | 11000 | 0.4570 | 0.3244 | | 0.0435 | 19.9 | 11500 | 0.4307 | 0.3243 | | 0.0348 | 20.76 | 12000 | 0.4763 | 0.3245 | | 0.036 | 21.63 | 12500 | 0.4635 | 0.3238 | | 0.0347 | 22.49 | 13000 | 0.4602 | 0.3212 | | 0.0333 | 23.36 | 13500 | 0.4472 | 0.3195 | | 0.0311 | 24.22 | 14000 | 0.4449 | 0.3183 | | 0.0294 | 25.09 | 14500 | 0.4631 | 0.3175 | | 0.025 | 25.95 | 15000 | 0.4466 | 0.3164 | | 0.023 | 26.82 | 15500 | 0.4581 | 0.3138 | | 0.0216 | 27.68 | 16000 | 0.4665 | 0.3114 | | 0.0198 | 28.55 | 16500 | 0.4590 | 0.3092 | | 0.0181 | 29.41 | 17000 | 0.4659 | 0.3080 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
1b655827545c185986b4060d6c99f2ac
gokuls/mobilebert_add_GLUE_Experiment_sst2_128
gokuls
mobilebert
17
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,956
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_sst2_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4543 - Accuracy: 0.7982 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6677 | 1.0 | 527 | 0.6771 | 0.5757 | | 0.5966 | 2.0 | 1054 | 0.7135 | 0.5424 | | 0.5714 | 3.0 | 1581 | 0.7271 | 0.5550 | | 0.5573 | 4.0 | 2108 | 0.6892 | 0.5619 | | 0.501 | 5.0 | 2635 | 0.4546 | 0.7798 | | 0.2856 | 6.0 | 3162 | 0.4613 | 0.8050 | | 0.2288 | 7.0 | 3689 | 0.4543 | 0.7982 | | 0.2027 | 8.0 | 4216 | 0.4662 | 0.7993 | | 0.1883 | 9.0 | 4743 | 0.5168 | 0.8039 | | 0.1779 | 10.0 | 5270 | 0.5748 | 0.7856 | | 0.1691 | 11.0 | 5797 | 0.5196 | 0.8028 | | 0.1596 | 12.0 | 6324 | 0.5943 | 0.7947 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
996e5a53a0f7418c39b28d390b6a65d2
SetFit/distilbert-base-uncased__subj__train-8-7
SetFit
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
4,306
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__subj__train-8-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2766 - Accuracy: 0.8845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7044 | 1.0 | 3 | 0.6909 | 0.5 | | 0.6678 | 2.0 | 6 | 0.6901 | 0.5 | | 0.6336 | 3.0 | 9 | 0.6807 | 0.5 | | 0.5926 | 4.0 | 12 | 0.6726 | 0.5 | | 0.5221 | 5.0 | 15 | 0.6648 | 0.5 | | 0.4573 | 6.0 | 18 | 0.6470 | 0.5 | | 0.4177 | 7.0 | 21 | 0.6251 | 0.5 | | 0.3252 | 8.0 | 24 | 0.5994 | 0.5 | | 0.2831 | 9.0 | 27 | 0.5529 | 0.5 | | 0.213 | 10.0 | 30 | 0.5078 | 0.75 | | 0.1808 | 11.0 | 33 | 0.4521 | 1.0 | | 0.1355 | 12.0 | 36 | 0.3996 | 1.0 | | 0.1027 | 13.0 | 39 | 0.3557 | 1.0 | | 0.0862 | 14.0 | 42 | 0.3121 | 1.0 | | 0.0682 | 15.0 | 45 | 0.2828 | 1.0 | | 0.0517 | 16.0 | 48 | 0.2603 | 1.0 | | 0.0466 | 17.0 | 51 | 0.2412 | 1.0 | | 0.038 | 18.0 | 54 | 0.2241 | 1.0 | | 0.0276 | 19.0 | 57 | 0.2096 | 1.0 | | 0.0246 | 20.0 | 60 | 0.1969 | 1.0 | | 0.0249 | 21.0 | 63 | 0.1859 | 1.0 | | 0.0201 | 22.0 | 66 | 0.1770 | 1.0 | | 0.018 | 23.0 | 69 | 0.1703 | 1.0 | | 0.0164 | 24.0 | 72 | 0.1670 | 1.0 | | 0.0172 | 25.0 | 75 | 0.1639 | 1.0 | | 0.0135 | 26.0 | 78 | 0.1604 | 1.0 | | 0.014 | 27.0 | 81 | 0.1585 | 1.0 | | 0.0108 | 28.0 | 84 | 0.1569 | 1.0 | | 0.0116 | 29.0 | 87 | 0.1549 | 1.0 | | 0.0111 | 30.0 | 90 | 0.1532 | 1.0 | | 0.0113 | 31.0 | 93 | 0.1513 | 1.0 | | 0.0104 | 32.0 | 96 | 0.1503 | 1.0 | | 0.01 | 33.0 | 99 | 0.1490 | 1.0 | | 0.0079 | 34.0 | 102 | 0.1479 | 1.0 | | 0.0097 | 35.0 | 105 | 0.1466 | 1.0 | | 0.0112 | 36.0 | 108 | 0.1458 | 1.0 | | 0.0091 | 37.0 | 111 | 0.1457 | 1.0 | | 0.0098 | 38.0 | 114 | 0.1454 | 1.0 | | 0.0076 | 39.0 | 117 | 0.1451 | 1.0 | | 0.0085 | 40.0 | 120 | 0.1448 | 1.0 | | 0.0079 | 41.0 | 123 | 0.1445 | 1.0 | | 0.0096 | 42.0 | 126 | 0.1440 | 1.0 | | 0.0081 | 43.0 | 129 | 0.1430 | 1.0 | | 0.0083 | 44.0 | 132 | 0.1424 | 1.0 | | 0.0088 | 45.0 | 135 | 0.1418 | 1.0 | | 0.0077 | 46.0 | 138 | 0.1414 | 1.0 | | 0.0073 | 47.0 | 141 | 0.1413 | 1.0 | | 0.0084 | 48.0 | 144 | 0.1412 | 1.0 | | 0.0072 | 49.0 | 147 | 0.1411 | 1.0 | | 0.0077 | 50.0 | 150 | 0.1411 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
01e16a1ff25e0c55bff3c9c7370995c7
wietsedv/xlm-roberta-base-ft-udpos28-el
wietsedv
xlm-roberta
8
50
transformers
0
token-classification
true
false
false
apache-2.0
['el']
['universal_dependencies']
null
0
0
0
0
0
0
0
['part-of-speech', 'token-classification']
true
true
true
565
false
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Greek This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el") ```
d533807673888f71abddc184580c486a
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2
DrishtiSharma
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['br']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
5,978
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-br-d2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: - Loss: 1.1257 - Wer: 0.4631 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Breton language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00034 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 750 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 14.0379 | 0.68 | 100 | 5.6808 | 1.0 | | 3.9145 | 1.35 | 200 | 3.1970 | 1.0 | | 3.0293 | 2.03 | 300 | 2.9513 | 1.0 | | 2.0927 | 2.7 | 400 | 1.4545 | 0.8887 | | 1.1556 | 3.38 | 500 | 1.0966 | 0.7564 | | 0.9628 | 4.05 | 600 | 0.9808 | 0.7364 | | 0.7869 | 4.73 | 700 | 1.0488 | 0.7355 | | 0.703 | 5.41 | 800 | 0.9500 | 0.6881 | | 0.6657 | 6.08 | 900 | 0.9309 | 0.6259 | | 0.5663 | 6.76 | 1000 | 0.9133 | 0.6357 | | 0.496 | 7.43 | 1100 | 0.9890 | 0.6028 | | 0.4748 | 8.11 | 1200 | 0.9469 | 0.5894 | | 0.4135 | 8.78 | 1300 | 0.9270 | 0.6045 | | 0.3579 | 9.46 | 1400 | 0.8818 | 0.5708 | | 0.353 | 10.14 | 1500 | 0.9244 | 0.5781 | | 0.334 | 10.81 | 1600 | 0.9009 | 0.5638 | | 0.2917 | 11.49 | 1700 | 1.0132 | 0.5828 | | 0.29 | 12.16 | 1800 | 0.9696 | 0.5668 | | 0.2691 | 12.84 | 1900 | 0.9811 | 0.5455 | | 0.25 | 13.51 | 2000 | 0.9951 | 0.5624 | | 0.2467 | 14.19 | 2100 | 0.9653 | 0.5573 | | 0.2242 | 14.86 | 2200 | 0.9714 | 0.5378 | | 0.2066 | 15.54 | 2300 | 0.9829 | 0.5394 | | 0.2075 | 16.22 | 2400 | 1.0547 | 0.5520 | | 0.1923 | 16.89 | 2500 | 1.0014 | 0.5397 | | 0.1919 | 17.57 | 2600 | 0.9978 | 0.5477 | | 0.1908 | 18.24 | 2700 | 1.1064 | 0.5397 | | 0.157 | 18.92 | 2800 | 1.0629 | 0.5238 | | 0.159 | 19.59 | 2900 | 1.0642 | 0.5321 | | 0.1652 | 20.27 | 3000 | 1.0207 | 0.5328 | | 0.141 | 20.95 | 3100 | 0.9948 | 0.5312 | | 0.1417 | 21.62 | 3200 | 1.0338 | 0.5328 | | 0.1514 | 22.3 | 3300 | 1.0513 | 0.5313 | | 0.1365 | 22.97 | 3400 | 1.0357 | 0.5291 | | 0.1319 | 23.65 | 3500 | 1.0587 | 0.5167 | | 0.1298 | 24.32 | 3600 | 1.0636 | 0.5236 | | 0.1245 | 25.0 | 3700 | 1.1367 | 0.5280 | | 0.1114 | 25.68 | 3800 | 1.0633 | 0.5200 | | 0.1088 | 26.35 | 3900 | 1.0495 | 0.5210 | | 0.1175 | 27.03 | 4000 | 1.0897 | 0.5095 | | 0.1043 | 27.7 | 4100 | 1.0580 | 0.5309 | | 0.0951 | 28.38 | 4200 | 1.0448 | 0.5067 | | 0.1011 | 29.05 | 4300 | 1.0665 | 0.5137 | | 0.0889 | 29.73 | 4400 | 1.0579 | 0.5026 | | 0.0833 | 30.41 | 4500 | 1.0740 | 0.5037 | | 0.0889 | 31.08 | 4600 | 1.0933 | 0.5083 | | 0.0784 | 31.76 | 4700 | 1.0715 | 0.5089 | | 0.0767 | 32.43 | 4800 | 1.0658 | 0.5049 | | 0.0769 | 33.11 | 4900 | 1.1118 | 0.4979 | | 0.0722 | 33.78 | 5000 | 
1.1413 | 0.4986 | | 0.0709 | 34.46 | 5100 | 1.0706 | 0.4885 | | 0.0664 | 35.14 | 5200 | 1.1217 | 0.4884 | | 0.0648 | 35.81 | 5300 | 1.1298 | 0.4941 | | 0.0657 | 36.49 | 5400 | 1.1330 | 0.4920 | | 0.0582 | 37.16 | 5500 | 1.0598 | 0.4835 | | 0.0602 | 37.84 | 5600 | 1.1097 | 0.4943 | | 0.0598 | 38.51 | 5700 | 1.0976 | 0.4876 | | 0.0547 | 39.19 | 5800 | 1.0734 | 0.4825 | | 0.0561 | 39.86 | 5900 | 1.0926 | 0.4850 | | 0.0516 | 40.54 | 6000 | 1.1579 | 0.4751 | | 0.0478 | 41.22 | 6100 | 1.1384 | 0.4706 | | 0.0396 | 41.89 | 6200 | 1.1462 | 0.4739 | | 0.0472 | 42.57 | 6300 | 1.1277 | 0.4732 | | 0.0447 | 43.24 | 6400 | 1.1517 | 0.4752 | | 0.0423 | 43.92 | 6500 | 1.1219 | 0.4784 | | 0.0426 | 44.59 | 6600 | 1.1311 | 0.4724 | | 0.0391 | 45.27 | 6700 | 1.1135 | 0.4692 | | 0.0362 | 45.95 | 6800 | 1.0878 | 0.4645 | | 0.0329 | 46.62 | 6900 | 1.1137 | 0.4668 | | 0.0356 | 47.3 | 7000 | 1.1233 | 0.4687 | | 0.0328 | 47.97 | 7100 | 1.1238 | 0.4653 | | 0.0323 | 48.65 | 7200 | 1.1307 | 0.4646 | | 0.0325 | 49.32 | 7300 | 1.1242 | 0.4645 | | 0.03 | 50.0 | 7400 | 1.1257 | 0.4631 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
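For quick transcription outside the evaluation script above, a minimal sketch (assuming `sample.wav` is a 16 kHz mono Breton recording you supply and `ffmpeg` is available for decoding):

```python
from transformers import pipeline

# Hedged sketch: greedy CTC decoding without an external language model.
asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2")
print(asr("sample.wav")["text"])
```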
50eca9ff7ff6cdcdfc87a4fb3ae66e17
garyw/clinical-embeddings-100d-gl-oa-all
garyw
null
3
0
null
0
null
false
false
false
gpl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,540
false
Pre-trained word embeddings using the text of published biomedical manuscripts. These embeddings use 100 dimensions and were trained using the GloVe algorithm on all published manuscripts found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/ Citation: ``` @article{flamholz2022word, title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information}, author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E}, journal={Journal of Biomedical Informatics}, volume={125}, pages={103971}, year={2022}, publisher={Elsevier} } ``` ## Quick start Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format. First download the files from this archive. Then load the embeddings into Python. ```python from gensim.models import FastText, Word2Vec, KeyedVectors # KeyedVectors are used to load the GloVe models # Load the model model = KeyedVectors.load_word2vec_format('gl_100_oa_all.txt') # Return 100-dimensional vector representations of each word model.word_vec('diabetes') model.word_vec('cardiac_arrest') model.word_vec('lymphangioleiomyomatosis') # Try out cosine similarity model.similarity('copd', 'chronic_obstructive_pulmonary_disease') model.similarity('myocardial_infarction', 'heart_attack') model.similarity('lymphangioleiomyomatosis', 'lam') ```
d53d35960243b6d04fae3b04eb1ac22b
azizbarank/mbert-finetuned-azerbaijani-ner
azizbarank
bert
13
9
transformers
1
token-classification
true
false
false
apache-2.0
null
['wikiann']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,558
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-finetuned-azerbaijani-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1385 - Precision: 0.8899 - Recall: 0.9154 - F1: 0.9025 - Accuracy: 0.9669 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2928 | 1.0 | 625 | 0.1415 | 0.8584 | 0.8918 | 0.8748 | 0.9595 | | 0.1254 | 2.0 | 1250 | 0.1335 | 0.8875 | 0.9119 | 0.8996 | 0.9637 | | 0.077 | 3.0 | 1875 | 0.1385 | 0.8899 | 0.9154 | 0.9025 | 0.9669 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
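A hedged usage sketch (the Azerbaijani example sentence and the aggregation strategy are illustrative choices, not from the original card):

```python
from transformers import pipeline

# Group sub-word pieces into whole entity spans with aggregation_strategy="simple".
ner = pipeline(
    "token-classification",
    model="azizbarank/mbert-finetuned-azerbaijani-ner",
    aggregation_strategy="simple",
)
print(ner("Bakı Azərbaycanın paytaxtıdır."))
```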
3131c1448c37ed6edd95bbed714cbf6f
kyoumiaoi/wav2vec2-base-timit-demo-google-colab
kyoumiaoi
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,998
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5499 - Wer: 0.3435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.599 | 1.0 | 500 | 2.1267 | 0.9976 | | 1.016 | 2.01 | 1000 | 0.6193 | 0.5443 | | 0.5299 | 3.01 | 1500 | 0.5324 | 0.4889 | | 0.3626 | 4.02 | 2000 | 0.4525 | 0.4402 | | 0.2854 | 5.02 | 2500 | 0.4266 | 0.4233 | | 0.2373 | 6.02 | 3000 | 0.4713 | 0.4082 | | 0.1979 | 7.03 | 3500 | 0.4778 | 0.4018 | | 0.1761 | 8.03 | 4000 | 0.4585 | 0.3947 | | 0.1537 | 9.04 | 4500 | 0.5297 | 0.3946 | | 0.1379 | 10.04 | 5000 | 0.4988 | 0.3856 | | 0.124 | 11.04 | 5500 | 0.5262 | 0.3852 | | 0.11 | 12.05 | 6000 | 0.5545 | 0.3854 | | 0.106 | 13.05 | 6500 | 0.5196 | 0.3805 | | 0.0918 | 14.06 | 7000 | 0.4515 | 0.3655 | | 0.0829 | 15.06 | 7500 | 0.5087 | 0.3722 | | 0.0775 | 16.06 | 8000 | 0.4980 | 0.3781 | | 0.0685 | 17.07 | 8500 | 0.5564 | 0.3650 | | 0.0655 | 18.07 | 9000 | 0.5323 | 0.3672 | | 0.0578 | 19.08 | 9500 | 0.5675 | 0.3637 | | 0.052 | 20.08 | 10000 | 0.5604 | 0.3664 | | 0.0512 | 21.08 | 10500 | 0.5922 | 0.3804 | | 0.0431 | 22.09 | 11000 | 0.6379 | 0.3754 | | 0.0428 | 23.09 | 11500 | 0.5905 | 0.3764 | | 0.0393 | 24.1 | 12000 | 0.5667 | 0.3542 | | 0.0326 | 25.1 | 12500 | 0.5612 | 0.3537 | | 0.0289 | 26.1 | 13000 | 0.5618 | 0.3475 | | 0.0298 | 27.11 | 13500 | 0.5578 | 0.3439 | | 0.0264 | 28.11 | 14000 | 0.5547 | 0.3433 | | 0.026 | 29.12 | 14500 | 0.5499 | 0.3435 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
a9f61296249c530710d4d2752e59942b
IIIT-L/muril-base-cased-finetuned-combined-DS
IIIT-L
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,382
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # muril-base-cased-finetuned-combined-DS This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5291 - Accuracy: 0.6657 - Precision: 0.6355 - Recall: 0.6275 - F1: 0.6294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.9961 | 2.0 | 711 | 0.9148 | 0.5625 | 0.5495 | 0.5636 | 0.5265 | | 0.8211 | 3.99 | 1422 | 0.8542 | 0.6096 | 0.6023 | 0.6071 | 0.5928 | | 0.6667 | 5.99 | 2133 | 0.8459 | 0.6601 | 0.6366 | 0.6379 | 0.6361 | | 0.5272 | 7.99 | 2844 | 0.9667 | 0.6517 | 0.6190 | 0.6223 | 0.6201 | | 0.4327 | 9.99 | 3555 | 1.0185 | 0.6503 | 0.6351 | 0.6222 | 0.6229 | | 0.3608 | 11.98 | 4266 | 1.1409 | 0.6313 | 0.6053 | 0.6100 | 0.6049 | | 0.3038 | 13.98 | 4977 | 1.2336 | 0.6601 | 0.6287 | 0.6269 | 0.6273 | | 0.2631 | 15.98 | 5688 | 1.3151 | 0.6503 | 0.6199 | 0.6167 | 0.6177 | | 0.2368 | 17.97 | 6399 | 1.4230 | 0.6594 | 0.6315 | 0.6233 | 0.6251 | | 0.2093 | 19.97 | 7110 | 1.4881 | 0.6629 | 0.6332 | 0.6220 | 0.6239 | | 0.1968 | 21.97 | 7821 | 1.5003 | 0.6559 | 0.6279 | 0.6230 | 0.6242 | | 0.1824 | 23.97 | 8532 | 1.5291 | 0.6657 | 0.6355 | 0.6275 | 0.6294 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
1f981d80e1293c7b477179af49ea1e98
shivkumarganesh/whisper-small-uz-v1
shivkumarganesh
whisper
22
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['uz']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,746
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Uzbek This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 uz dataset. It achieves the following results on the evaluation set: - Loss: 0.4357 - Wer: 25.7857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3621 | 1.03 | 1000 | 0.4819 | 32.3209 | | 0.2378 | 2.07 | 2000 | 0.4413 | 29.0077 | | 0.2342 | 4.01 | 3000 | 0.4224 | 27.3939 | | 0.1286 | 5.04 | 4000 | 0.4357 | 25.7857 | | 0.1192 | 6.08 | 5000 | 0.4727 | 27.2752 | | 0.0147 | 8.02 | 6000 | 0.5230 | 26.7267 | | 0.0425 | 9.05 | 7000 | 0.5336 | 26.3628 | | 0.0059 | 10.08 | 8000 | 0.5658 | 26.8476 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
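A hedged transcription sketch (`audio.mp3` stands in for any Uzbek speech clip you supply; `chunk_length_s` is only needed for clips longer than 30 seconds):

```python
from transformers import pipeline

# Whisper checkpoints work directly with the ASR pipeline; chunking handles long audio.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="shivkumarganesh/whisper-small-uz-v1",
    chunk_length_s=30,
)
print(transcriber("audio.mp3")["text"])
```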
0c42e75d5d43b57955254ec419044a0b
DCU-NLP/electra-base-irish-cased-generator-v1
DCU-NLP
electra
6
2
transformers
0
fill-mask
true
false
false
apache-2.0
['ga']
null
null
0
0
0
0
0
0
0
['irish', 'electra']
false
true
true
1,505
false
# gaELECTRA [gaELECTRA](https://arxiv.org/abs/2107.12930) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model. ### Limitations and bias Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations. ### BibTeX entry and citation info If you use this model in your research, please consider citing our paper: ``` @article{DBLP:journals/corr/abs-2107-12930, author = {James Barry and Joachim Wagner and Lauren Cassidy and Alan Cowap and Teresa Lynn and Abigail Walsh and M{\'{\i}}che{\'{a}}l J. {\'{O}} Meachair and Jennifer Foster}, title = {gaBERT - an Irish Language Model}, journal = {CoRR}, volume = {abs/2107.12930}, year = {2021}, url = {https://arxiv.org/abs/2107.12930}, archivePrefix = {arXiv}, eprint = {2107.12930}, timestamp = {Fri, 30 Jul 2021 13:03:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-12930.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
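Since this is the generator checkpoint, masked-token prediction is the natural quick test; a hedged sketch (the Irish example sentence is illustrative):

```python
from transformers import pipeline

# Fill-mask with the ELECTRA generator head; the mask token is read from the tokenizer.
fill_mask = pipeline("fill-mask", model="DCU-NLP/electra-base-irish-cased-generator-v1")
sentence = f"Tá an aimsir go {fill_mask.tokenizer.mask_token} inniu."
print(fill_mask(sentence))
```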
acb4ec82ef6c85943f5056a53c957853
Deep98/Web_browser-clustered
Deep98
distilbert
8
0
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,857
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Deep98/Web_browser-clustered This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1604 - Train End Logits Accuracy: 0.9826 - Train Start Logits Accuracy: 0.9375 - Validation Loss: 0.0757 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 1.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.1604 | 0.9826 | 0.9375 | 0.0757 | 1.0 | 1.0 | 0 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
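A hedged sketch for extractive question answering (the checkpoint ships TensorFlow weights, so the TF auto class is used; the question and context are illustrative placeholders):

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("Deep98/Web_browser-clustered")
model = TFAutoModelForQuestionAnswering.from_pretrained("Deep98/Web_browser-clustered")

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(
    question="What is a web browser used for?",
    context="A web browser is an application used to access and display websites on the World Wide Web.",
))
```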
bec220e31c65ca6faa652ee81a7d383f
Gourieff/p-AI-nter_v0.2
Gourieff
null
20
5
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
1,722
false
## p-AI-nter -- v0.2 Core model is SD-1.5, trained on artworks of different painters (Rob Hefferan, Anna Marinova, Omar Ortiz, Thomas Saliot, Serge Marshennikov). Use the token 'oil painting' in your prompts for better effect. > Trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). ## Sample pictures: ![0](https://huggingface.co/Gourieff/p-AI-nter_v0.2/resolve/main/sample_images/01.jpg) ## Prompt and settings for samples ``` (portrait photo)++ of (young)+ woman on river bank, dressed in silk shirt, golden and white and bronze color scheme, (oil painting)+, (epic composition)+, intricate, Highly Detailed, Sharp focus, dramatic light, (high bun black hair)++, (bokeh)+, (deep eyes)+, (sunset)++, (model pose)+, (ideal hands)++, (ray tracing)++, (cleavage)+, (ideal breast)+ ``` __negative:__ ``` Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, extra limb, ugly, poorly drawn hands, missing limb, blurry, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, mutated hands and fingers, fat, overweight, multiple heads, group of people, three or more legs, cross-eye, nude, naked, naked, (extra fingers)+, (fused fingers)+ ``` * Steps: 50 * Scale: 9 * Sampler: Euler_A - - -
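A hedged `diffusers` sketch (assumes the repo is stored in the standard Stable Diffusion pipeline layout and a CUDA GPU is available; the prompt is a shortened version of the sample prompt above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Gourieff/p-AI-nter_v0.2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(
    "oil painting portrait of a young woman on a river bank at sunset, epic composition, highly detailed",
    num_inference_steps=50,
    guidance_scale=9,
).images[0]
image.save("painter_sample.png")
```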
a40c886227dd5de163a6626932cb9511
jonatasgrosman/exp_w2v2t_ar_vp-fr_s957
jonatasgrosman
wav2vec2
10
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ar']
false
true
true
469
false
# exp_w2v2t_ar_vp-fr_s957 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
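A minimal sketch with the HuggingSound tool mentioned above (`sample.mp3` is a placeholder for a 16 kHz Arabic recording you supply):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_vp-fr_s957")
transcriptions = model.transcribe(["sample.mp3"])
print(transcriptions[0]["transcription"])
```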
91f98dbb7155b4d0a0efbc85ed9459c9
DOOGLAK/Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED
DOOGLAK
bert
13
6
transformers
0
token-classification
true
false
false
apache-2.0
null
['tagged_uni100v0_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,565
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v0_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4601 - Precision: 0.1802 - Recall: 0.0830 - F1: 0.1137 - Accuracy: 0.8143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 33 | 0.5687 | 0.0882 | 0.0015 | 0.0030 | 0.7791 | | No log | 2.0 | 66 | 0.5410 | 0.1319 | 0.0270 | 0.0448 | 0.7946 | | No log | 3.0 | 99 | 0.4601 | 0.1802 | 0.0830 | 0.1137 | 0.8143 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
f123b75061123d677594a1295c0c950f
facebook/maskformer-swin-tiny-coco
facebook
maskformer
5
770
transformers
0
image-segmentation
true
false
false
other
null
['coco']
null
0
0
0
0
0
0
0
['vision', 'image-segmentation']
false
true
true
2,521
false
# MaskFormer MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation from PIL import Image import requests # load MaskFormer fine-tuned on COCO panoptic segmentation feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco") model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to feature_extractor for postprocessing result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) predicted_panoptic_map = result["segmentation"] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
9497358c6e24b6a499455ad6d045a506
Habana/albert-xxlarge-v1
Habana
null
3
1,443
null
0
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,298
false
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana). ## ALBERT XXLarge model HPU configuration This model only contains the `GaudiConfig` file for running the [albert-xxlarge-v1](https://huggingface.co/albert-xxlarge-v1) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** The GaudiConfig lets you specify: - `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP) - `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation - `hmp_bf16_ops`: list of operators that should run in bf16 - `hmp_fp32_ops`: list of operators that should run in fp32 - `hmp_is_verbose`: verbosity - `use_fused_adam`: whether to use Habana's custom AdamW implementation - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator ## Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with ALBERT XXL with the following command: ```bash python run_qa.py \ --model_name_or_path albert-xxlarge-v1 \ --gaudi_config_name Habana/albert-xxlarge-v1 \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --per_device_eval_batch_size 2 \ --learning_rate 5e-6 \ --num_train_epochs 2 \ --max_seq_length 384 \ --output_dir /tmp/squad/ \ --use_habana \ --use_lazy_mode \ --throughput_warmup_steps 2 ``` Check out the [documentation](https://huggingface.co/docs/optimum/habana/index) for more advanced usage and examples.
dff18465b440785fb994447440efcac9
FluxML/resnet101
FluxML
null
3
0
null
0
null
false
false
false
mit
null
null
null
3
0
3
0
0
0
0
[]
false
true
true
520
false
ResNet101 model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef). To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/#Adding-packages). Then execute: ```julia using Metalhead model = ResNet(101; pretrain = true) ```
a4a0cb6631e7ac1a71e179052c16ed5f
arun100/whisper-small-zu_za
arun100
whisper
26
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['google/fleurs']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,358
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Zulu This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs zu_za dataset. It achieves the following results on the evaluation set: - Loss: 1.1143 - Wer: 56.7866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.6219 | 9.01 | 100 | 1.0758 | 62.0201 | | 0.0318 | 18.01 | 200 | 1.1143 | 56.7866 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
8d49227bf9cb8d9da8e95f02f826e4c9
nateraw/my-aurora
nateraw
null
8
1
diffusers
0
null
false
false
false
apache-2.0
['en']
['aurora']
null
0
0
0
0
0
0
0
['🧨 Diffuse It']
false
true
true
1,176
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # my-aurora ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `aurora` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/nateraw/my-aurora/tensorboard?#scalars)
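One way to fill in the usage TODO above, as a hedged sketch (assumes the repo follows the standard unconditional `diffusers` pipeline layout saved by the training script):

```python
from diffusers import DiffusionPipeline

# DiffusionPipeline picks the concrete pipeline and scheduler classes from the saved model_index.json.
pipeline = DiffusionPipeline.from_pretrained("nateraw/my-aurora")
image = pipeline().images[0]
image.save("aurora_sample.png")
```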
c49d4ad7027dbecc90008523fed6d9e5
Helsinki-NLP/opus-mt-tc-base-hu-uk
Helsinki-NLP
marian
13
3
transformers
0
translation
true
true
false
cc-by-4.0
['hu', 'uk']
null
null
1
0
1
0
0
0
0
['translation', 'opus-mt-tc']
true
true
true
5,261
false
# opus-mt-tc-base-hu-uk Neural machine translation model for translating from Hungarian (hu) to Ukrainian (uk). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-08 * source language(s): hun * target language(s): ukr * model: transformer-align * data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip) * more information released models: [OPUS-MT hun-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-ukr/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "1000 dollárral tartozom neked.", "Vizet iszom." ] model_name = "pytorch-models/opus-mt-tc-base-hu-uk" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Я зобов'язаний вам 1000 доларів. # Я п'ю воду. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-hu-uk") print(pipe("1000 dollárral tartozom neked.")) # expected output: Я зобов'язаний вам 1000 доларів. 
``` ## Benchmarks * test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt) * test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | hun-ukr | tatoeba-test-v2021-08-07 | 0.61006 | 38.1 | 473 | 2606 | | hun-ukr | flores101-devtest | 0.49490 | 19.8 | 1012 | 22810 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 1bdabf7 * port time: Thu Mar 24 02:19:16 EET 2022 * port machine: LM0-400-22516.local
f92815e2f36c983b7c9431abf838d494
muhtasham/medium-mlm-imdb-target-tweet
muhtasham
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['tweet_eval']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,478
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-imdb-target-tweet This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.6869 - Accuracy: 0.7620 - F1: 0.7599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.456 | 4.9 | 500 | 0.8890 | 0.7754 | 0.7720 | | 0.0578 | 9.8 | 1000 | 1.3492 | 0.7540 | 0.7509 | | 0.0173 | 14.71 | 1500 | 1.6143 | 0.7594 | 0.7584 | | 0.0124 | 19.61 | 2000 | 1.6869 | 0.7620 | 0.7599 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
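A hedged inference sketch (the example tweet is illustrative; label names come from the fine-tuned config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/medium-mlm-imdb-target-tweet")
print(classifier("Loved the atmosphere at the stadium tonight!"))
```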
4eba92c1ecde1e9635bad9cbcbf8737c
minhhoque/vit-base-patch16-224-in21k-finetuned-cifar10-test
minhhoque
vit
7
1
transformers
0
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,069
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-cifar10-test This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
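A hedged inference sketch (`image.png` stands in for any local RGB image; the class names depend on how the CIFAR-10 labels were mapped during fine-tuning):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="minhhoque/vit-base-patch16-224-in21k-finetuned-cifar10-test",
)
print(classifier("image.png"))
```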
59983a1dca658f15adbe5c7629ec4cbf
jiobiala24/wav2vec2-base-checkpoint-8
jiobiala24
wav2vec2
13
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,359
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-8 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9561 - Wer: 0.3271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 | | 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 | | 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 | | 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 | | 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 | | 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 | | 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 | | 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 | | 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 | | 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 | | 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 | | 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 | | 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 | | 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 | | 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 | | 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 | | 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 | | 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
f1b6f51dd820b32965ed0c3235997d0c
AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-1e-6-v2
AlekseyKorshuk
opt
13
2
transformers
0
text-generation
true
false
false
other
null
['AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,061
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-book-handwritten-io-constant-1e-6-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset. It achieves the following results on the evaluation set: - Loss: 2.4238 - Accuracy: 0.2793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5852 | 0.08 | 6 | 2.5957 | 0.2697 | | 2.5956 | 0.16 | 12 | 2.5762 | 0.2706 | | 2.5961 | 0.24 | 18 | 2.5547 | 0.2711 | | 2.5731 | 0.32 | 24 | 2.5312 | 0.2722 | | 2.5415 | 0.4 | 30 | 2.5117 | 0.2734 | | 2.5168 | 0.48 | 36 | 2.4961 | 0.2746 | | 2.4972 | 0.56 | 42 | 2.4824 | 0.2756 | | 2.4354 | 0.64 | 48 | 2.4727 | 0.2761 | | 2.4055 | 0.72 | 54 | 2.4609 | 0.2768 | | 2.4681 | 0.8 | 60 | 2.4492 | 0.2778 | | 2.5866 | 0.88 | 66 | 2.4355 | 0.2784 | | 2.4221 | 0.96 | 72 | 2.4238 | 0.2793 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
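A hedged generation sketch (a 6.7B-parameter checkpoint needs roughly 13 GB of GPU memory in fp16; `device_map="auto"` assumes the `accelerate` package is installed, and the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-1e-6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The most important principle is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```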
ccf503f4fec2a774b7e6a1d11398d8ee
sd-concepts-library/sas-style
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,083
false
### SAS style on Stable Diffusion This is the `<smooth-aesthetic-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<smooth-aesthetic-style> 0](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/3.jpeg) ![<smooth-aesthetic-style> 1](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/1.jpeg) ![<smooth-aesthetic-style> 2](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/0.jpeg) ![<smooth-aesthetic-style> 3](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/2.jpeg)
6ae9c2512795a8445121bcaf0976966b
Suniljl/shru
Suniljl
null
18
10
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
413
false
### shru Dreambooth model trained by Suniljl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
2ecbfa824cbdb31fb138ce5cf914be09
coreml/coreml-RPG
coreml
null
8
0
null
4
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
2
1
1
['coreml', 'stable-diffusion', 'text-to-image']
false
true
true
4,424
false
# Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br> - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br> - `split_einsum` version is compatible with all compute unit options including Neural Engine.<br> - `original` version is only compatible with CPU & GPU option.<br> # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). # RPG: Source(s): [Hugging Face](https://huggingface.co/Anashel/rpg) - [CivitAI](https://civitai.com/models/1116/rpg) **Latest Update: Feb 5th, 2023** - Version 4.0 is live **[available here](https://huggingface.co/Anashel/rpg/tree/main/RPG-V4-Model-Download)** - New Prompt User Guide for RPG v4 **[Download Now](https://huggingface.co/Anashel/rpg/resolve/main/RPG-V4-Model-Download/RPG-Guide-v4.pdf)** ## Contribute If you wish to support the prompt research on this project: - Rate RPG V4 on **[CivitAI](https://civitai.com/models/1116/rpg)** - Donate (ETH Only): anashel.eth | 0xc4055f3c65D01a48Bc47bE87751794eA9f42E367 ## Future Updates I am in the process of writing a detailed guide with a list of words you can switch easily in the main prompt. Ex: Blood Elf Knight, Female Death Knight Mage, etc... In the meantime, feel free to share your creation on my *[Discord Server](https://discord.gg/7CGDRjDz7P)* --- ## RPG v4 Render Sample ![07.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655387859-631ba4758de8e645af703f33.jpeg) ![03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655391409-631ba4758de8e645af703f33.jpeg) ![02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655393058-631ba4758de8e645af703f33.jpeg) ![05.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655429420-631ba4758de8e645af703f33.jpeg) ![04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655446594-631ba4758de8e645af703f33.jpeg) ![01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655485563-631ba4758de8e645af703f33.jpeg) --- **How to reach me** - Reddit: [u/Anashel](https://www.reddit.com/user/anashel) - Discord: [RPG V3 Channel](https://discord.gg/rDrhtWZk8u) ---- ## RPG v3 Render Sample ![01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979006989-631ba4758de8e645af703f33.jpeg) ![02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979015000-631ba4758de8e645af703f33.jpeg) ![03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979010769-631ba4758de8e645af703f33.jpeg) ![04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979024887-631ba4758de8e645af703f33.jpeg) ![05.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979028290-631ba4758de8e645af703f33.jpeg) ## RPG v2 Render Sample Generated with RPG V2. 
[Available here](https://huggingface.co/Anashel/rpg/tree/main/All-Concept-Zip-Format) ![Cover-01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337224-631ba4758de8e645af703f33.jpeg) ![Cover-02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337238-631ba4758de8e645af703f33.jpeg) ![Cover-03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337256-631ba4758de8e645af703f33.jpeg) ![Cover-04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337271-631ba4758de8e645af703f33.jpeg) ---- ## OTHER EXAMPLE ![02.png](https://s3.amazonaws.com/moonup/production/uploads/1669621805120-631ba4758de8e645af703f33.png) ![03.png](https://s3.amazonaws.com/moonup/production/uploads/1669621861406-631ba4758de8e645af703f33.png) ![04.png](https://s3.amazonaws.com/moonup/production/uploads/1669621871167-631ba4758de8e645af703f33.png) ![05.png](https://s3.amazonaws.com/moonup/production/uploads/1669621878493-631ba4758de8e645af703f33.png) ![06.png](https://s3.amazonaws.com/moonup/production/uploads/1669621914034-631ba4758de8e645af703f33.png) ![07.png](https://s3.amazonaws.com/moonup/production/uploads/1669621922049-631ba4758de8e645af703f33.png) ![08.png](https://s3.amazonaws.com/moonup/production/uploads/1669621929158-631ba4758de8e645af703f33.png)
1932c0226bf1808603ed5cd8315146b5
jonatasgrosman/exp_w2v2t_uk_vp-fr_s473
jonatasgrosman
wav2vec2
10
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['uk']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'uk']
false
true
true
469
false
# exp_w2v2t_uk_vp-fr_s473 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0c8a1d48bb2581ec72f6c9e85dc50b2a
yousef22/output
yousef22
wav2vec2
7
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
966
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0 - Pytorch 1.13.0+cu116 - Datasets 1.18.3 - Tokenizers 0.13.2
d3f1a6e7c9bdaf0120e41559036d541b
sentence-transformers/nli-distilbert-base
sentence-transformers
distilbert
13
29
sentence-transformers
0
sentence-similarity
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,775
false
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/nli-distilbert-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/nli-distilbert-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-distilbert-base') model = AutoModel.from_pretrained('sentence-transformers/nli-distilbert-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-distilbert-base) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
538f029da4a621607de03c8a8269ca95