Dataset schema:

| Column | Dtype | Observed range / classes |
|---|---|---|
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27 |
| model_type | string | lengths 2–29 |
| files_per_repo | int64 | 2 – 15.4k |
| downloads_30d | int64 | 0 – 19.9M |
| library | string | lengths 2–37 |
| likes | int64 | 0 – 4.34k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0 – 125 |
| prs_open | int64 | 0 – 120 |
| prs_merged | int64 | 0 – 15 |
| prs_closed | int64 | 0 – 28 |
| discussions_count | int64 | 0 – 218 |
| discussions_open | int64 | 0 – 148 |
| discussions_closed | int64 | 0 – 70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 – 598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | lengths 32–32 |
**Suva/uptag-url-model-v2** · author: Suva · model_type: t5 · files_per_repo: 8 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: ['arxiv'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,044 · is_nc: false
## Usage

```python
abstract = """We describe a system called Overton, whose main design goal is to support engineers in building, monitoring, and improving production machine learning systems. Key challenges engineers face are monitoring fine-grained quality, diagnosing errors in sophisticated applications, and handling contradictory or incomplete supervision data. Overton automates the life cycle of model construction, deployment, and monitoring by providing a set of novel high-level, declarative abstractions. Overton's vision is to shift developers to these higher-level tasks instead of lower-level machine learning tasks. In fact, using Overton, engineers can build deep-learning-based applications without writing any code in frameworks like TensorFlow. For over a year, Overton has been used in production to support multiple applications in both near-real-time applications and back-of-house processing. In that time, Overton-based applications have answered billions of queries in multiple languages and processed trillions of records reducing errors 1.7-2.9 times versus production systems."""
```

### Using Transformers 🤗

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Suva/uptag-url-model-v2"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

input_ids = tokenizer.encode("summarize: " + abstract, return_tensors="pt", add_special_tokens=True)
generated_ids = model.generate(
    input_ids=input_ids,
    num_beams=5,
    max_length=100,
    repetition_penalty=2.5,
    length_penalty=1,
    early_stopping=True,
    num_return_sequences=3,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds)

# output:
# ["Overton: Building, Deploying, and Monitoring Machine Learning Systems for Engineers",
#  "Overton: A System for Building, Monitoring, and Improving Production Machine Learning Systems",
#  "Overton: Building, Monitoring, and Improving Production Machine Learning Systems"]
```
hash: 7592e65c2a78bf0b26c6b57ed48e771c
**yanaiela/roberta-base-epoch_83** · author: yanaiela · model_type: roberta · files_per_repo: 9 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: fill-mask · pytorch: true · tensorflow: false · jax: false · license: mit · languages: ['en'] · datasets: ['wikipedia', 'bookcorpus'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['roberta-base', 'roberta-base-epoch_83'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,102 · is_nc: false
# RoBERTa, Intermediate Checkpoint - Epoch 83

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) to make it possible to study the training dynamics of such models, as well as other use cases.

These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_83.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
hash: 753f30d2ed0117d5f94a8d4868e7df5f
**google/deeplabv3_mobilenet_v2_1.0_513** · author: google · model_type: mobilenet_v2 · files_per_repo: 5 · downloads_30d: 1,598 · library: transformers · likes: 0 · pipeline: image-segmentation · pytorch: true · tensorflow: false · jax: false · license: other · languages: null · datasets: ['pascal-voc'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['vision', 'image-segmentation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,546 · is_nc: false
# MobileNetV2 with DeepLabV3+

MobileNet V2 model pre-trained on PASCAL VOC at resolution 513x513. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. It was first released in [this repository](https://github.com/tensorflow/models/tree/master/research/deeplab).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The model in this repo adds a [DeepLabV3+](https://arxiv.org/abs/1802.02611) head to the MobileNetV2 backbone for semantic segmentation.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

preprocessor = AutoImageProcessor.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")
model = AutoModelForSemanticSegmentation.from_pretrained("google/deeplabv3_mobilenet_v2_1.0_513")

inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_mask = preprocessor.post_process_semantic_segmentation(outputs)
```

Currently, both the feature extractor and model support PyTorch.

### BibTeX entry and citation info

```bibtex
@inproceedings{deeplabv3plus2018,
  title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
  author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
  booktitle={ECCV},
  year={2018}
}
```
hash: 6bd11781a5accf273be4701ac7c0ab30
**kasukanra/linebrush-style** · author: kasukanra · model_type: null · files_per_repo: 18 · downloads_30d: 3 · library: diffusers · likes: 2 · pipeline: null · pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null · prs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,091 · is_nc: false
This is a fine-tuned Stable Diffusion model (v1.5) trained on images with an ink-brush or heavy-lineart style from various fantasy concept illustrators and designers.

Use the tokens **DBlinebrush style** in your prompts for the effect. Download the checkpoint file (.ckpt) to use it.

If you want to **reproduce** the images below, I've prepared a more detailed version of the prompt settings and seed values in this **GitHub Gist**: https://gist.github.com/kudou-reira/eadf52cef156eb566cff886221823748

Example settings: Steps: 50, Sampler: Euler, CFG scale: 7, Size: 512x512

Example prompt: **DBlinebrush style**, masterpiece, 1girl, beautiful portrait of an anime female adventurer, monochrome

Some example images in txt2img (**very minimal editing/cleanup**):

![Fantasy Art Style](https://huggingface.co/kasukanra/linebrush-style/resolve/main/linebrush_style_01.png)
![Fantasy Art Style](https://huggingface.co/kasukanra/linebrush-style/resolve/main/linebrush_style_02.png)
![Fantasy Art Style](https://huggingface.co/kasukanra/linebrush-style/resolve/main/linebrush_style_03.png)
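For a quick test outside a WebUI, a minimal `diffusers` sketch could look like the following; loading this repo directly with `from_pretrained` assumes it ships diffusers-format weights alongside the .ckpt, which the card does not state:

```python
# Hypothetical sketch: assumes kasukanra/linebrush-style hosts diffusers-format
# weights; otherwise load the .ckpt in your Stable Diffusion UI of choice.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kasukanra/linebrush-style", torch_dtype=torch.float16
).to("cuda")

# The trained style tokens go directly into the prompt.
prompt = ("DBlinebrush style, masterpiece, 1girl, beautiful portrait "
          "of an anime female adventurer, monochrome")
image = pipe(prompt, num_inference_steps=50, guidance_scale=7).images[0]
image.save("linebrush_sample.png")
```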
hash: 10cb5a1e8a2e5f782abb79f7ace5f5ce
**projecte-aina/roberta-base-ca-v2** · author: projecte-aina · model_type: roberta · files_per_repo: 10 · downloads_30d: 14 · library: transformers · likes: 1 · pipeline: fill-mask · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['ca'] · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['catalan', 'masked-lm', 'RoBERTa-base-ca-v2', 'CaText', 'Catalan Textual Corpus'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 10,647 · is_nc: false
# Catalan BERTa-v2 (roberta-base-ca-v2) base model

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [CLUB benchmark](#club-benchmark)
  - [Evaluation results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)

</details>

## Model description

The **roberta-base-ca-v2** is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.

## Intended uses and limitations

The **roberta-base-ca-v2** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.

## How to use

Here is how to use this model:

```python
from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer, FillMaskPipeline
from pprint import pprint

tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-base-ca-v2')
model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-base-ca-v2')
model.eval()
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = f"Em dic <mask>."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```

## Limitations and bias

At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data

The training corpus consists of several corpora gathered from web crawling and public corpora.

| Corpus                   | Size in GB |
|--------------------------|------------|
| Catalan Crawling         | 13.00      |
| Wikipedia                | 1.10       |
| DOGC                     | 0.78       |
| Catalan Open Subtitles   | 0.02       |
| Catalan Oscar            | 4.00       |
| CaWaC                    | 3.60       |
| Cat. General Crawling    | 2.50       |
| Cat. Government Crawling | 0.24       |
| ACN                      | 0.42       |
| Padicat                  | 0.63       |
| RacoCatalá               | 8.10       |
| Nació Digital            | 0.42       |
| Vilaweb                  | 0.06       |
| Tweets                   | 0.02       |

### Training procedure

The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 50,262 tokens. The RoBERTa-ca-v2 pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model, with the same hyperparameters as in the original work. The training lasted a total of 96 hours on 16 NVIDIA V100 GPUs with 16GB DDRAM.

## Evaluation

### CLUB benchmark

The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which was created along with the model. It contains the following tasks and their related datasets:

1. Named Entity Recognition (NER)
   **[NER (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [AnCora](https://doi.org/10.5281/zenodo.4762030) version, filtering out some unconventional ones, like book titles, and transcribed them into a standard CoNLL-IOB format.
2. Part-of-Speech Tagging (POS)
   **[POS (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known AnCora corpus.
3. Text Classification (TC)
   **[TeCla](https://huggingface.co/datasets/projecte-aina/tecla)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus, with 30 labels.
4. Textual Entailment (TE)
   **[TE-ca](https://huggingface.co/datasets/projecte-aina/teca)**: consisting of 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction, or neutral), extracted from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
5. Semantic Textual Similarity (STS)
   **[STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca)**: consisting of more than 3,000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
6. Question Answering (QA):
   **[VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad)**: contains 6,282 pairs of questions and answers, outsourced from 2,095 Catalan-language articles from VilaWeb newswire text.
   **[ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia, randomly chosen from a set of 596 articles originally written in Catalan.
   **[CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa)**: an aggregation of the 2 previous datasets (VilaQuAD and ViquiQuAD); 21,427 Q/A pairs balanced by question type, containing one question and one answer per context, although the contexts can repeat multiple times.
   **[XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia, used only as a _test set_.

Here are the train/dev/test splits of the datasets:

| Task (Dataset)  | Total   | Train   | Dev    | Test   |
|:----------------|:--------|:--------|:-------|:-------|
| NER (AnCora)    | 13,581  | 10,628  | 1,427  | 1,526  |
| POS (AnCora)    | 16,678  | 13,123  | 1,709  | 1,846  |
| STS (STS-ca)    | 3,073   | 2,073   | 500    | 500    |
| TC (TeCla)      | 137,775 | 110,203 | 13,786 | 13,786 |
| TE (TE-ca)      | 21,163  | 16,930  | 2,116  | 2,117  |
| QA (VilaQuAD)   | 6,282   | 3,882   | 1,200  | 1,200  |
| QA (ViquiQuAD)  | 14,239  | 11,255  | 1,492  | 1,429  |
| QA (CatalanQA)  | 21,427  | 17,135  | 2,157  | 2,135  |

### Evaluation results

| Model | NER (F1) | POS (F1) | STS-ca (Comb) | TeCla (Acc.) | TE-ca (Acc.) | VilaQuAD (F1/EM) | ViquiQuAD (F1/EM) | CatalanQA (F1/EM) | XQuAD-ca <sup>1</sup> (F1/EM) |
|:--------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------------:|:---------------:|:---------------:|:---------------:|
| RoBERTa-large-ca-v2 | **89.82** | **99.02** | **83.41** | **75.46** | **83.61** | **89.34/75.50** | **89.20**/75.77 | **90.72/79.06** | **73.79**/55.34 |
| RoBERTa-base-ca-v2  | 89.29 | 98.96 | 79.07 | 74.26 | 83.14 | 87.74/72.58 | 88.72/**75.91** | 89.50/76.63 | 73.64/**55.42** |
| BERTa               | 89.76 | 98.96 | 80.19 | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/77.14 | 69.20/51.47 |
| mBERT               | 86.87 | 98.83 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa         | 86.31 | 98.89 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 |

<sup>1</sup>: Trained on CatalanQA, tested on XQuAD-ca.

## Additional information

### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

### Contact information
For further information, send an email to aina@bsc.es

### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center

### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

### Citation information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

</details>
hash: 0c5602f267c31cfe66b853dc85425e42
**hidude562/discordgpt2mini** · author: hidude562 · model_type: null · files_per_repo: 13 · downloads_30d: 0 · library: null · likes: 0 · pipeline: null · pytorch: false · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,062 · is_nc: false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-discordgpt2

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.3032
- eval_runtime: 59.2004
- eval_samples_per_second: 274.542
- eval_steps_per_second: 34.324
- epoch: 0.26
- step: 25500

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
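Since the card's usage sections are unfilled, here is a minimal, hedged sketch of sampling from the model; it assumes the checkpoint loads as a standard GPT-2 model through `transformers`, which the repo metadata does not confirm:

```python
# Hypothetical sketch: assumes the checkpoint loads as a standard GPT-2
# checkpoint via transformers; the repo's metadata lists no library.
from transformers import pipeline

generator = pipeline("text-generation", model="hidude562/discordgpt2mini")
print(generator("hey, what's up?", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```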
hash: 9f6c0dfc8e1c4db52786ebe2b6214db7
**calebcsjm/reversed_harrypotter_generation** · author: calebcsjm · model_type: gpt2 · files_per_repo: 12 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: text-generation · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 917 · is_nc: false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# reversed_harrypotter_generation

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
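As the card's usage sections are unfilled, a minimal sketch of generating with this distilgpt2 fine-tune might look like this (the prompt and sampling settings are illustrative only):

```python
# A minimal sketch of sampling from the fine-tune with the transformers
# pipeline; generation parameters here are illustrative, not from the card.
from transformers import pipeline

generator = pipeline("text-generation", model="calebcsjm/reversed_harrypotter_generation")
print(generator("The castle gates", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```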
hash: 5d313fa15940889631845d112bf8a917
**hisaoka/bart-large-cnn_radiology-ai-imagingcancer-0.9** · author: hisaoka · model_type: bart · files_per_repo: 11 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,051 · is_nc: false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-large-cnn_radiology-ai-imagingcancer-0.9

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
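The card shows no inference code; since this is a `facebook/bart-large-cnn` fine-tune exposed as text2text-generation, a hedged summarization sketch could look like this (the sample report text is invented):

```python
# A minimal sketch: the model is a BART text2text fine-tune, so it can be
# driven with the summarization pipeline; the sample report is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="hisaoka/bart-large-cnn_radiology-ai-imagingcancer-0.9")
report = "Findings: There is a 12 mm nodule in the right upper lobe. No pleural effusion."
print(summarizer(report, max_length=60)[0]["summary_text"])
```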
hash: 5c6effa0a7ccba383647f66a991feabf
**Normchell/sd_v1-4_helluva-boss_stolas** · author: Normchell · model_type: null · files_per_repo: 4 · downloads_30d: 0 · library: diffusers · likes: 0 · pipeline: text-to-image · pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: ['en'] · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'helluva-boss', 'stolas', 'hell'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 409 · is_nc: false
Hey guys! Here is my new model based on SD v1.4.

This model is based on pictures of Stolas from Helluva Boss. Install this model and use hb_stls_nmchl in your text prompt to get a picture of Stolas as a character and not as an owl.

Example grid (picture of hb_stls_nmchl character with neon eyes, cyberpunk style):

![](https://huggingface.co/Normchell/sd_v1-4_helluva-boss_stolas/resolve/main/grid-0035.png)

-------
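No loading code is given in the card; a minimal `diffusers` sketch, under the assumption that the repo hosts diffusers-format weights (it is tagged `stable-diffusion-diffusers`), might be:

```python
# Hypothetical sketch: assumes the repo hosts diffusers-format weights;
# otherwise load the checkpoint in your usual Stable Diffusion UI.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Normchell/sd_v1-4_helluva-boss_stolas")
image = pipe("hb_stls_nmchl character with neon eyes, cyberpunk style").images[0]
image.save("stolas.png")
```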
hash: 84ac9375b529867e8907142d6f30ad70
**model-attribution-challenge/gpt-neo-125M** · author: model-attribution-challenge · model_type: gpt_neo · files_per_repo: 10 · downloads_30d: 140 · library: transformers · likes: 1 · pipeline: text-generation · pytorch: true · tensorflow: false · jax: true · license: apache-2.0 · languages: ['en'] · datasets: ['The Pile'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['text generation', 'pytorch', 'causal-lm'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,336 · is_nc: false
# GPT-Neo 125M

## Model Description

GPT-Neo 125M is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 125M represents the number of parameters of this particular pre-trained model.

## Training data

GPT-Neo 125M was trained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model.

## Training procedure

This model was trained on the Pile for 300 billion tokens over 572,300 steps. It was trained as an autoregressive language model, using cross-entropy loss.

## Intended Use and Limitations

Through this training, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for: generating texts from a prompt.

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-125M')
>>> generator("EleutherAI has", do_sample=True, min_length=50)

[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```

### Limitations and Biases

GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.

GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

## Eval results

TBD

### Down-Stream Applications

TBD

### BibTeX entry and citation info

To cite this model, use

```bibtex
@software{gpt-neo,
  author       = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}

@article{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```
hash: 413ddb533e29dc372593495521eb4679
**csarron/mobilebert-uncased-squad-v1** · author: csarron · model_type: mobilebert · files_per_repo: 8 · downloads_30d: 24 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: true · tensorflow: false · jax: false · license: mit · languages: ['en'] · datasets: ['squad'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['question-answering', 'mobilebert'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,732 · is_nc: false
## MobileBERT fine-tuned on SQuAD v1

[MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.

This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer).

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K     |
| SQuAD1.1 | eval  | 11.1K     |

### Fine-tuning

- Python: `3.7.5`
- Machine specs:
  - CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz
  - Memory: 32 GiB
  - GPUs: 2 GeForce GTX 1070, each with 8 GiB memory
  - GPU driver: 418.87.01, CUDA: 10.1
- script:

```shell
# after install https://github.com/huggingface/transformers

cd examples/question-answering
mkdir -p data

wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json

export SQUAD_DIR=`pwd`/data

python run_squad.py \
  --model_type mobilebert \
  --model_name_or_path google/mobilebert-uncased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --per_gpu_train_batch_size 16 \
  --per_gpu_eval_batch_size 16 \
  --learning_rate 4e-5 \
  --num_train_epochs 5.0 \
  --max_seq_length 320 \
  --doc_stride 128 \
  --warmup_steps 1400 \
  --output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v1 2>&1 | tee train-mobilebert-warmup-squad_v1.log
```

It took about 3 hours to finish.

### Results

**Model size**: `95M`

| Metric | # Value  | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf)) |
| ------ | -------- | --------- |
| **EM** | **82.6** | **82.9**  |
| **F1** | **90.0** | **90.0**  |

Note that the above results didn't involve any hyperparameter search.

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="csarron/mobilebert-uncased-squad-v1",
    tokenizer="csarron/mobilebert-uncased-squad-v1"
)

predictions = qa_pipeline({
    'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.",
    'question': "What day was the game played on?"
})

print(predictions)
# output:
# {'score': 0.7754058241844177, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'}
```

> Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp)

> Made with ❤️ in New York.
hash: 12d923e6cb6ce7a0dc1e35db816487c9
**sd-concepts-library/lxj-o4** · author: sd-concepts-library · model_type: null · files_per_repo: 10 · downloads_30d: 0 · library: null · likes: 1 · pipeline: null · pytorch: false · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,070 · is_nc: false
### lxj-o4 on Stable Diffusion

This is the `<csp>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<csp> 0](https://huggingface.co/sd-concepts-library/lxj-o4/resolve/main/concept_images/3.jpeg)
![<csp> 1](https://huggingface.co/sd-concepts-library/lxj-o4/resolve/main/concept_images/0.jpeg)
![<csp> 2](https://huggingface.co/sd-concepts-library/lxj-o4/resolve/main/concept_images/1.jpeg)
![<csp> 3](https://huggingface.co/sd-concepts-library/lxj-o4/resolve/main/concept_images/2.jpeg)
![<csp> 4](https://huggingface.co/sd-concepts-library/lxj-o4/resolve/main/concept_images/4.jpeg)
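A minimal sketch of using the concept locally with `diffusers` (the base-model choice and prompt are ours; the card itself only points at the Colab notebooks):

```python
# A minimal sketch, assuming the repo follows the standard sd-concepts-library
# layout with a diffusers-compatible learned embedding.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/lxj-o4")

# The learned placeholder token <csp> is used directly in the prompt.
image = pipe("a landscape painting in the style of <csp>").images[0]
image.save("lxj-o4_sample.png")
```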
hash: 6a0a863eb9fddb570403049536249fbf
**sd-dreambooth-library/skshikakinotonoderugomi** · author: sd-dreambooth-library · model_type: null · files_per_repo: 20 · downloads_30d: 4 · library: diffusers · likes: 0 · pipeline: text-to-image · pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null · prs: 2 (open 2, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['text-to-image'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 819 · is_nc: false
### sksHikakinotonoderugomi Dreambooth model

Trained by Hirokusa with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:

![sksHikakinotonoderugomi 0](https://huggingface.co/sd-dreambooth-library/skshikakinotonoderugomi/resolve/main/sample_images/sksHikakinotonoderugomi_(1131251).png)
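For a local alternative to the Colab notebooks, a minimal `diffusers` inference sketch might look like this (the dtype and prompt are our assumptions):

```python
# A minimal sketch of running the DreamBooth concept with diffusers, as the
# card's inference notebook does; precision and prompt choices are ours.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/skshikakinotonoderugomi", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sksHikakinotonoderugomi").images[0]
image.save("sample.png")
```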
hash: 23fa8c226049daeff6c69fdb8250d975
**yashas123/finetuning-sentiment-model** · author: yashas123 · model_type: distilbert · files_per_repo: 15 · downloads_30d: 27 · library: transformers · likes: 0 · pipeline: text-classification · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['imdb'] · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,040 · is_nc: false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7491
- Accuracy: 0.8567
- F1: 0.8581

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
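A minimal usage sketch with the `transformers` text-classification pipeline (the example sentence is invented):

```python
# A minimal sketch: the model is a DistilBERT sentiment fine-tune on imdb,
# so the text-classification pipeline applies; the input is invented.
from transformers import pipeline

classifier = pipeline("text-classification", model="yashas123/finetuning-sentiment-model")
print(classifier("This movie was surprisingly good!"))
```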
hash: 3b0ca77c84424773f98bcf0d787b3f0f
**Helsinki-NLP/opus-mt-eu-es** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 84 · library: transformers · likes: 1 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['eu', 'es'] · datasets: null · co2: null · prs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,010 · is_nc: false
### eus-spa

* source group: Basque
* target group: Spanish
* OPUS readme: [eus-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-spa/README.md)
* model: transformer-align
* source language(s): eus
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eus-spa/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.eus.spa | 48.8 | 0.673 |

### System Info:

- hf_name: eus-spa
- source_languages: eus
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eus-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eu', 'es']
- src_constituents: {'eus'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eus-spa/opus-2020-06-17.test.txt
- src_alpha3: eus
- tgt_alpha3: spa
- short_pair: eu-es
- chrF2_score: 0.6729999999999999
- bleu: 48.8
- brevity_penalty: 0.9640000000000001
- ref_len: 12469.0
- src_name: Basque
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: eu
- tgt_alpha2: es
- prefer_old: False
- long_pair: eus-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
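The card documents the OPUS training artifacts but no inference code; a minimal `transformers` sketch might be (the sample Basque sentence is ours):

```python
# A minimal sketch using the transformers translation pipeline with this
# MarianMT model; the input sentence is an invented example.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-eu-es")
print(translator("Kaixo, zer moduz?")[0]["translation_text"])
```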
hash: 28c8a044fe754ad2936957d432053ee9
**PaddlePaddle/ernie-3.0-medium-zh** · author: PaddlePaddle · model_type: ernie · files_per_repo: 7 · downloads_30d: 0 · library: paddlenlp · likes: 0 · pipeline: null · pytorch: false · tensorflow: false · jax: false · license: apache-2.0 · languages: ['zh'] · datasets: null · co2: null · prs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 51,585 · is_nc: false
[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP)

# PaddlePaddle/ernie-3.0-medium-zh

## Intro

[ERNIE 3.0 Models](https://github.com/paddlepaddle/PaddleNLP/tree/develop/model_zoo/ernie-3.0) are lightweight models obtained from the Wenxin large model ERNIE 3.0 using distillation technology. The model structure is consistent with ERNIE 2.0, and it performs better on Chinese tasks than ERNIE 2.0. For a detailed explanation of the underlying techniques, please refer to the article [_An Analysis of the Technical Details of PCL-Baidu Wenxin, the World's Largest Chinese Monolithic Model_](https://www.jiqizhixin.com/articles/2021-12-08-9) (in Chinese).

## How to Use

Click on "Use in paddlenlp" in the top right corner!

## Performance

ERNIE 3.0 open-sources six models:

- **ERNIE 3.0-_XBase_** (_20-layer, 1024-hidden, 16-heads_)
- **ERNIE 3.0-_Base_** (_12-layer, 768-hidden, 12-heads_)
- **ERNIE 3.0-_Medium_** (_6-layer, 768-hidden, 12-heads_)
- **ERNIE 3.0-_Mini_** (_6-layer, 384-hidden, 12-heads_)
- **ERNIE 3.0-_Micro_** (_4-layer, 384-hidden, 12-heads_)
- **ERNIE 3.0-_Nano_** (_4-layer, 312-hidden, 12-heads_)

Below is the **precision-latency graph** of the small Chinese models in PaddleNLP. The x-axis is the latency (in ms) measured on the CLUE IFLYTEK dataset (maximum sequence length set to 128), and the y-axis is the average accuracy on 10 CLUE tasks (covering text classification, text matching, natural language inference, pronoun disambiguation, machine reading comprehension and other tasks); the metric for CMRC2018 is Exact Match (EM), and the metric for the other tasks is Accuracy. The closer a model sits to the top left of the figure, the better its accuracy-performance trade-off. The number of parameters is marked under each model name in the figure. For the test environment, see [Performance Test](https://github.com/paddlepaddle/PaddleNLP/tree/develop/model_zoo/ernie-3.0#%E6%80%A7%E8%83%BD%E6%B5%8B%E8%AF%95) for details.

Precision-latency graph under CPU (number of threads: 1 and 8), batch_size = 32:

![](https://user-images.githubusercontent.com/26483581/175852121-2798b5c9-d122-4ac0-b4c8-da46b89b5512.png)
![](https://user-images.githubusercontent.com/26483581/175852129-bbe58835-8eec-45d5-a4a9-cc2cf9a3db6a.png)

Precision-latency graph under CPU (number of threads: 1 and 8), batch_size = 1:

![](https://user-images.githubusercontent.com/26483581/175852106-658e18e7-705b-4f53-bad0-027281163ae3.png)
![](https://user-images.githubusercontent.com/26483581/175852112-4b89d675-7c95-4d75-84b6-db5a6ea95e2c.png)

Precision-latency graph under GPU, batch_size = 32 and 1:

![](https://user-images.githubusercontent.com/26483581/175854679-3247f42e-8716-4a36-b5c6-9ce4661b36c7.png)
![](https://user-images.githubusercontent.com/26483581/175854670-57878b34-c213-47ac-b620-aaaec082f435.png)

As the figures show, the ERNIE 3.0 Tiny models are comprehensively ahead of UER-py, Huawei-Noah and HFL in both accuracy and performance.
When batch_size = 1 and the precision mode is FP16, the inference performance of the wide-and-shallow models on GPU is even more advantageous.

The precision data on the CLUE **validation set** are shown in the following table:

| Arch | Model | AVG | AFQMC | TNEWS | IFLYTEK | CMNLI | OCNLI | CLUEWSC2020 | CSL | CMRC2018 | CHID | C<sup>3</sup> |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 24L1024H | ERNIE 1.0-Large-cw | **79.03** | 75.97 | 59.65 | **62.91** | **85.09** | **81.73** | **93.09** | **84.53** | **74.22/91.88** | **88.57** | **84.54** |
| 24L1024H | ERNIE 2.0-Large-zh | 76.90 | **76.23** | **59.33** | 61.91 | 83.85 | 79.93 | 89.82 | 83.23 | 70.95/90.31 | 86.78 | 78.12 |
| 24L1024H | RoBERTa-wwm-ext-large | 76.61 | 76.00 | 59.33 | 62.02 | 83.88 | 78.81 | 90.79 | 83.67 | 70.58/89.82 | 85.72 | 75.26 |
| 20L1024H | **ERNIE 3.0-Xbase-zh** | **78.39** | **76.16** | **59.55** | **61.87** | **84.40** | **81.73** | **88.82** | **83.60** | **75.99/93.00** | **86.78** | **84.98** |
| 12L768H | [ERNIE 3.0-Base-zh](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams) | 76.05 | 75.93 | 58.26 | 61.56 | 83.02 | **80.10** | 86.18 | 82.63 | 70.71/90.41 | 84.26 | **77.88** |
| 12L768H | ERNIE 1.0-Base-zh-cw | **76.47** | **76.07** | 57.86 | 59.91 | **83.41** | 79.58 | **89.91** | **83.42** | **72.88/90.78** | **84.68** | 76.98 |
| 12L768H | ERNIE-Gram-zh | 75.72 | 75.28 | 57.88 | 60.87 | 82.90 | 79.08 | 88.82 | 82.83 | 71.82/90.38 | 84.04 | 73.69 |
| 12L768H | Langboat/Mengzi-BERT-Base | 74.69 | 75.35 | 57.76 | 61.64 | 82.41 | 77.93 | 88.16 | 82.20 | 67.04/88.35 | 83.74 | 70.70 |
| 12L768H | ERNIE 2.0-Base-zh | 74.32 | 75.65 | 58.25 | 61.64 | 82.62 | 78.71 | 81.91 | 82.33 | 66.08/87.46 | 82.78 | 73.19 |
| 12L768H | ERNIE 1.0-Base-zh | 74.17 | 74.84 | **58.91** | **62.25** | 81.68 | 76.58 | 85.20 | 82.77 | 67.32/87.83 | 82.47 | 69.68 |
| 12L768H | RoBERTa-wwm-ext | 74.11 | 74.60 | 58.08 | 61.23 | 81.11 | 76.92 | 88.49 | 80.77 | 68.39/88.50 | 83.43 | 68.03 |
| 12L768H | BERT-Base-Chinese | 72.57 | 74.63 | 57.13 | 61.29 | 80.97 | 75.22 | 81.91 | 81.90 | 65.30/86.53 | 82.01 | 65.38 |
| 12L768H | UER/Chinese-RoBERTa-Base | 71.78 | 72.89 | 57.62 | 61.14 | 80.01 | 75.56 | 81.58 | 80.80 | 63.87/84.95 | 81.52 | 62.76 |
| 8L512H | UER/Chinese-RoBERTa-Medium | 67.06 | 70.64 | 56.10 | 58.29 | 77.35 | 71.90 | 68.09 | 78.63 | 57.63/78.91 | 75.13 | 56.84 |
| 6L768H | [ERNIE 3.0-Medium-zh](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams) | **72.49** | **73.37** | **57.00** | 60.67 | **80.64** | **76.88** | **79.28** | **81.60** | **65.83/87.30** | **79.91** | **69.73** |
| 6L768H | HFL/RBT6, Chinese | 70.06 | 73.45 | 56.82 | 59.64 | 79.36 | 73.32 | 76.64 | 80.67 | 62.72/84.77 | 78.17 | 59.85 |
| 6L768H | TinyBERT<sub>6</sub>, Chinese | 69.62 | 72.22 | 55.70 | 54.48 | 79.12 | 74.07 | 77.63 | 80.17 | 63.03/83.75 | 77.64 | 62.11 |
| 6L768H | RoFormerV2 Small | 68.52 | 72.47 | 56.53 | **60.72** | 76.37 | 72.95 | 75.00 | 81.07 | 62.97/83.64 | 67.66 | 59.41 |
| 6L768H | UER/Chinese-RoBERTa-L6-H768 | 67.09 | 70.13 | 56.54 | 60.48 | 77.49 | 72.00 | 72.04 | 77.33 | 53.74/75.52 | 76.73 | 54.40 |
| 6L384H | [ERNIE 3.0-Mini-zh](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams) | 66.90 | 71.85 | 55.24 | 54.48 | 77.19 | 73.08 | 71.05 | 79.30 | 58.53/81.97 | 69.71 | 58.60 |
| 4L768H | HFL/RBT4, Chinese | 67.42 | 72.41 | 56.50 | 58.95 | 77.34 | 70.78 | 71.05 | 78.23 | 59.30/81.93 | 73.18 | 56.45 |
| 4L512H | UER/Chinese-RoBERTa-Small | 63.25 | 69.21 | 55.41 | 57.552 | 73.64 | 69.80 | 66.78 | 74.83 | 46.75/69.69 | 67.59 | 50.92 |
| 4L384H | [ERNIE 3.0-Micro-zh](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams) | 64.21 | 71.15 | 55.05 | 53.83 | 74.81 | 70.41 | 69.08 | | | | |
style="text-align:center"> <span style="font-size:18px">76.50</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.77/77.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.26</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.53</span> </td> </tr> <tr> <td rowspan=2 align=center> 4L312H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams"> ERNIE 3.0-Nano-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>62.97</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.51</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>54.57</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>48.36</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>74.97</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.61</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">68.75</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>75.93</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>52.00/76.35</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>58.91</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>55.11</b></span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">TinyBERT<sub>4</sub>, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">60.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.02</span> </td> <td style="text-align:center"> <span style="font-size:18px">39.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.94</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.59</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.07</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">75.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">46.04/69.34</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">52.18</span> </td> </tr> <tr> <td rowspan=1 align=center> 4L256H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Mini</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.40</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.32</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.22</span> </td> <td style="text-align:center"> <span style="font-size:18px">41.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.40</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.36</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">5.96/17.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">51.19</span> </td> <td style="text-align:center"> <span style="font-size:18px">39.68</span> </td> </tr> <tr> <td 
rowspan=1 align=center> 3L1024H </td> <td style="text-align:center"> <span style="font-size:18px">HFL/RBTL3, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">66.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.14</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.56</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.29</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.74</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.93</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.50/80.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.03</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.56</span> </td> </tr> <tr> <td rowspan=1 align=center> 3L768H </td> <td style="text-align:center"> <span style="font-size:18px">HFL/RBT3, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.72</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.18</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.20</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.73/78.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.26</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.93</span> </td> </tr> <tr> <td rowspan=1 align=center> 2L128H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Tiny</span> </td> <td style="text-align:center"> <span style="font-size:18px">44.45</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.02</span> </td> <td style="text-align:center"> <span style="font-size:18px">51.47</span> </td> <td style="text-align:center"> <span style="font-size:18px">20.28</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.73</span> </td> <td style="text-align:center"> <span style="font-size:18px">63.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.43</span> </td> <td style="text-align:center"> <span style="font-size:18px">3.08/14.33</span> </td> <td style="text-align:center"> <span style="font-size:18px">23.57</span> </td> <td style="text-align:center"> <span style="font-size:18px">28.12</span> </td> </tr> <tbody> </table> <br /> ## Citation Info ```text @article{sun2021ernie, title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation}, author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others}, journal={arXiv preprint arXiv:2107.02137}, year={2021} } @article{su2021ernie, title={Ernie-tiny: A progressive distillation framework for 
pretrained transformer compression}, author={Su, Weiyue and Chen, Xuyi and Feng, Shikun and Liu, Jiaxiang and Liu, Weixin and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng}, journal={arXiv preprint arXiv:2106.02241}, year={2021} } @article{wang2021ernie, title={Ernie 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation}, author={Wang, Shuohuan and Sun, Yu and Xiang, Yang and Wu, Zhihua and Ding, Siyu and Gong, Weibao and Feng, Shikun and Shang, Junyuan and Zhao, Yanbin and Pang, Chao and others}, journal={arXiv preprint arXiv:2112.12731}, year={2021} } ```
e1e117510252a705b270b344aa8cc4cd
vikaskapur/sentimental
vikaskapur
bert
8
1
transformers
1
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
752
false
# Model Details
* The SENTIMENTAL classifier is trained to predict the likelihood that a comment will be perceived as positive or negative.
* BERT-based text classification.

# Intended Use
* Intended for a wide range of use cases, such as supporting human moderation and extracting the polarity of review comments.
* Not intended for fully automated moderation.
* Not intended to make judgments about specific individuals.

# Factors
* Identity terms referencing frequently positive and negative emotions.

# Metrics
* Accuracy, which measures the percentage of correct predictions (true positives and true negatives).

# Ethical Considerations
* TODO

# Quantitative Analyses
* TODO

# Training Data
* TODO

# Evaluation Data
* TODO

# Caveats and Recommendations
* TODO
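# Example Usage

A minimal sketch for querying the classifier through the `transformers` pipeline; the example comment is illustrative, and the exact label names returned depend on the model config:

```python
from transformers import pipeline

# Load the classifier; the pipeline applies the model's own tokenizer.
classifier = pipeline("text-classification", model="vikaskapur/sentimental")

# Label names (e.g. POSITIVE/NEGATIVE) are set in the model config.
print(classifier("The customer service was quick and friendly."))
```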
e256f507f3ff841838549955536966b7
theojolliffe/bart-large-cnn-finetuned-roundup-2-4
theojolliffe
bart
13
3
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,775
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-roundup-2-4 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0908 - Rouge1: 51.9961 - Rouge2: 32.3963 - Rougel: 32.1774 - Rougelsum: 50.1033 - Gen Len: 141.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 167 | 1.2152 | 52.234 | 33.1104 | 33.308 | 49.5516 | 142.0 | | No log | 2.0 | 334 | 1.1054 | 52.7096 | 33.4698 | 33.9595 | 49.8736 | 140.3333 | | 1.0437 | 3.0 | 501 | 1.0796 | 51.699 | 32.4255 | 34.0294 | 49.5276 | 141.7143 | | 1.0437 | 4.0 | 668 | 1.0908 | 51.9961 | 32.3963 | 32.1774 | 50.1033 | 141.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
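## Example usage

A minimal summarization sketch, assuming the standard `transformers` pipeline API; the generation lengths below are illustrative, not the values used during training or evaluation:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-2-4")

text = "..."  # replace with the round-up text to be summarised
# max_length/min_length are illustrative defaults for this sketch.
print(summarizer(text, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```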
b2721623733545b47810bfe1c3b552b5
m3hrdadfi/wav2vec2-large-xlsr-estonian
m3hrdadfi
wav2vec2
18
9
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['et']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
8,750
false
# Wav2Vec2-Large-XLSR-53-Estonian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: **Requirements** ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !pip install jiwer ``` **Prediction** ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset import numpy as np import re import string import IPython.display as ipd chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: õhulossid lagunevad ning ees ootab maapind predicted: õhulassid lagunevad ning ees ootab maapind --- reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab 
ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost --- reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane --- reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust --- reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta --- reference: parem on otsida pileteid inkognito veebi kaudu predicted: parem on otsida pileteid ning kognitu veebikaudu --- reference: ja vot siin ma jäin vaikseks predicted: ja vat siisma ja invaikseks --- reference: mida sa iseendale juubeli puhul soovid predicted: mida saise endale jubeli puhul soovid --- reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks predicted: kuumuse ja kõrge temperatuuri tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks --- reference: ilmselt on inimesi kelle jaoks on see hea lahendus predicted: ilmselt on inimesi kelle jaoks on see hea lahendus --- ``` ## Evaluation The model can be evaluated as follows on the Estonian test data of Common Voice. ```python import librosa import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor from datasets import load_dataset, load_metric import numpy as np import re import string chars_to_ignore = [ ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "#", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if 
torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 33.93% ## Training & Report The Common Voice `train`, `validation` datasets were used for training. You can see the training states [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw) The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb)
3b917f8513c5fd0113eb3cf143851a92
Frikallo/DeepDunk
Frikallo
gpt2
16
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
910
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeepDunk This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001372 - train_batch_size: 1 - eval_batch_size: 8 - seed: 1360794382 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
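## Example usage

A minimal generation sketch; the prompt and sampling settings are illustrative assumptions, not part of the training setup:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Frikallo/DeepDunk")

# The prompt is arbitrary; temperature/max_length are illustrative.
print(generator("The dunk of the year was", max_length=50, do_sample=True, temperature=0.9)[0]["generated_text"])
```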
4dec6d477f857b59adbf3e3b61c1490c
lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class
lmvasque
bert
9
3
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
6,067
false
## Readability benchmark (ES): mbert-es-paragraphs-3class

This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).

## Models

Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentence and paragraph datasets. You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link). These are the available models you can use (current model page in bold):

| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class)** | **paragraphs** | **3** |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |

For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.

## Results

These are our results for all the readability models in different settings.
Please select your model based on the desired performance:

| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |

## Citation

If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)

```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
    title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
    author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
      Cuenca-Jim{\'e}nez, Pedro-Manuel and
      Morales-Esquivel, Sergio Esteban and
      Alva-Manchego, Fernando",
    booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
    month = dec,
    year = "2022",
}
```
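## Example usage

A minimal sketch for scoring a paragraph with this model; the Spanish example text and the id-to-level mapping are assumptions (verify the mapping against `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

paragraph = "La fotosíntesis es el proceso mediante el cual las plantas transforman la luz en energía."
inputs = tokenizer(paragraph, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed ordering of ids to basic/intermediate/advanced; check model.config.id2label.
pred = logits.argmax(dim=-1).item()
print(pred)
```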
b3977ae6c3c30d1dca8083e4af609880
BasStein/doe2vec-d5-m8-ls24-VAE-kl0.001
BasStein
null
6
0
keras
0
null
false
false
false
apache-2.0
['en']
['BasStein/250000-randomfunctions-5d']
{'emissions': 0.0363, 'source': 'code carbon', 'training_type': 'pre-training', 'geographical_location': 'Leiden, The Netherlands', 'hardware_used': '1 Tesla T4'}
0
0
0
0
0
0
0
['doe2vec', 'exploratory-landscape-analysis', 'autoencoders']
false
true
true
1,231
false
## Model description

DoE2Vec is a model that can transform any design of experiments (function landscape) into a feature vector. Different input dimensions or sample sizes require different models. Each model name is built up as `doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}`

Example code for loading this huggingface model using the doe2vec package. First install the package

```zsh
pip install doe2vec
```

Then import and load the model.

```python
from doe2vec import doe_model

obj = doe_model(
    5,
    8,
    latent_dim=24,
    kl_weight=0.001,
    model_type="VAE"
)
obj.load_from_huggingface()
# test the model
obj.plot_label_clusters_bbob()
```

## Intended uses & limitations

The model is intended to be used to generate feature representations for optimization function landscapes. The representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning.

## Training procedure

The model is trained using a weighted KL loss and a mean squared error reconstruction loss. The model is trained on 250,000 randomly generated functions (see the dataset) over 100 epochs.

- **Hardware:** 1x Tesla T4 GPU
- **Optimizer:** Adam
b0d888b7867cd6c059726648729c5226
vicgalle/xlm-roberta-large-xnli-anli
vicgalle
xlm-roberta
7
12,757
transformers
10
zero-shot-classification
true
false
false
mit
['multilingual']
['mnli', 'xnli', 'anli']
null
0
0
0
0
0
0
0
['zero-shot-classification', 'nli', 'pytorch']
false
true
true
1,090
false
### XLM-RoBERTa-large-XNLI-ANLI

XLM-RoBERTa-large model fine-tuned over several NLI datasets, ready to use for zero-shot classification. Here are the accuracies for several test datasets:

|                             | XNLI-es | XNLI-fr | ANLI-R1 | ANLI-R2 | ANLI-R3 |
|-----------------------------|---------|---------|---------|---------|---------|
| xlm-roberta-large-xnli-anli | 93.7%   | 93.2%   | 68.5%   | 53.6%   | 49.0%   |

The model can be loaded with the zero-shot-classification pipeline like so:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="vicgalle/xlm-roberta-large-xnli-anli")
```

You can then use this pipeline to classify sequences into any of the class names you specify:

```python
sequence_to_classify = "Algún día iré a ver el mundo"
candidate_labels = ['viaje', 'cocina', 'danza']
classifier(sequence_to_classify, candidate_labels)
#{'sequence': 'Algún día iré a ver el mundo',
#'labels': ['viaje', 'danza', 'cocina'],
#'scores': [0.9991760849952698, 0.0004178212257102132, 0.0004059972707182169]}
```
00652a195fe0913374694f35888fbec5
jonatasgrosman/exp_w2v2t_fr_no-pretraining_s208
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
414
false
# exp_w2v2t_fr_no-pretraining_s208 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
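A minimal transcription sketch using the HuggingSound tool mentioned above; the audio path is a placeholder, and the file should be sampled at 16kHz:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_no-pretraining_s208")

# Placeholder paths to 16 kHz audio files.
audio_paths = ["/path/to/sample1.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```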
b5146b5349b15a960a561d4c8fcc958c
mycringefactory/spamtontalk_gpt_neo_xl_v10
mycringefactory
gpt_neo
8
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,060
false
#### Example
##### gen.py
```python
from transformers import GPTNeoForCausalLM, AutoTokenizer
import torch
import sys

# The model directory is passed on the command line.
model_name = sys.argv[1]
model = GPTNeoForCausalLM.from_pretrained(model_name).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate(model, text, temperature=0.9, min_length=256, max_length=256,
             no_grad=True, use_cache=False, do_sample=True, match_mesh_tf=False,
             **kwargs):
    # match_mesh_tf is accepted for compatibility but currently unused.
    ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")
    # Disable gradient tracking during generation unless explicitly requested.
    with torch.set_grad_enabled(not no_grad):
        gen_tokens = model.generate(
            ids,
            do_sample=do_sample,
            min_length=min_length,
            max_length=max_length,
            temperature=temperature,
            use_cache=use_cache,
            **kwargs
        )
    gen_text = tokenizer.batch_decode(gen_tokens)[0]
    print(gen_text)
```
```
python gen.py spamtontalk_gpt_neo_xl_v10
>>> text = """Talk (anything): Example dialogue"""
>>> generate(model, text, temperature=0.92)
```
a1dff9e64259dbce2009b4f73b24a2e3
JoanTirant/bert-finetuned-ner
JoanTirant
bert
12
9
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,518
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0679 - Precision: 0.9364 - Recall: 0.9488 - F1: 0.9426 - Accuracy: 0.9855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0884 | 1.0 | 1756 | 0.0662 | 0.9083 | 0.9317 | 0.9198 | 0.9824 | | 0.04 | 2.0 | 3512 | 0.0613 | 0.9341 | 0.9493 | 0.9417 | 0.9856 | | 0.0187 | 3.0 | 5268 | 0.0679 | 0.9364 | 0.9488 | 0.9426 | 0.9855 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
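## Example usage

A minimal sketch, assuming the standard `transformers` token-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens back into whole entities.
ner = pipeline("token-classification", model="JoanTirant/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```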
0948ffab7beab86ca9855866dfdd85f9
realmadrid1016/beit-base-patch16-224-finetuned-eurosat
realmadrid1016
beit
14
8
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,468
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # beit-base-patch16-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0067 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4792 | 0.95 | 15 | 0.0402 | 0.985 | | 0.0481 | 1.95 | 30 | 0.0067 | 1.0 | | 0.0561 | 2.95 | 45 | 0.0086 | 0.995 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
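## Example usage

A minimal sketch, assuming the standard `transformers` image-classification pipeline; the image path is a placeholder, and the returned labels come from the fine-tuning image folder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="realmadrid1016/beit-base-patch16-224-finetuned-eurosat")

# Placeholder path to an RGB satellite image tile.
print(classifier("satellite_tile.jpg"))
```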
1dae5c95d321e0047075d613a36ad5c5
Helsinki-NLP/opus-mt-fr-mfe
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-fr-mfe * source languages: fr * target languages: mfe * OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fr.mfe | 26.1 | 0.451 |
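## Example usage

A minimal sketch, assuming the standard MarianMT API in `transformers`; the French example sentence is illustrative:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-fr-mfe"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Translate a batch of French sentences into Morisyen.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```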
3bf0aec50a01b4ec50d0c4d0d724dd35
ssoll/NuMergeMix
ssoll
null
15
0
null
7
null
false
false
false
openrail
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,569
false
d41d8cd98f00b204e9800998ecf8427e
sd-concepts-library/green-tent
sd-concepts-library
null
11
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,246
false
### green-tent on Stable Diffusion This is the `<green-tent>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<green-tent> 0](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/1.jpeg) ![<green-tent> 1](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/5.jpeg) ![<green-tent> 2](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/0.jpeg) ![<green-tent> 3](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/4.jpeg) ![<green-tent> 4](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/2.jpeg) ![<green-tent> 5](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/3.jpeg)
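A minimal sketch for using the concept outside the linked notebooks; it assumes a recent `diffusers` release (with `load_textual_inversion`) and a v1-compatible base checkpoint, both of which are assumptions rather than part of this card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any Stable Diffusion v1-compatible model should work.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/green-tent")

image = pipe("a photo of a <green-tent> by a mountain lake").images[0]
image.save("green_tent.png")
```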
80c8a91f7a6758067c06a1aa98212e34
Helsinki-NLP/opus-mt-de-fi
Helsinki-NLP
marian
10
3,134
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
false
### opus-mt-de-fi * source languages: de * target languages: fi * OPUS readme: [de-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.de.fi | 40.0 | 0.628 |
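## Example usage

A minimal sketch, assuming the standard `transformers` translation pipeline; the German example sentence is illustrative:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-fi")
print(translator("Guten Morgen, wie geht es Ihnen?"))
```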
03805ce11a279038165fb123eb655538
PdF/xlm-roberta-base-finetuned-panx-de
PdF
xlm-roberta
11
5
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,313
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1348 - F1: 0.8658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.254 | 1.0 | 525 | 0.1647 | 0.8200 | | 0.1285 | 2.0 | 1050 | 0.1454 | 0.8443 | | 0.0808 | 3.0 | 1575 | 0.1348 | 0.8658 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 2.1.0 - Tokenizers 0.10.3
a811c538680b67fa2b88536b80c9d783
lijingxin/xlm-roberta-base-finetuned-panx-de-fr
lijingxin
xlm-roberta
10
11
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,314
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1664 - F1: 0.8556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 | | 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 | | 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
c09fa56df48553beb8020e26f29c7c40
google/electra-base-discriminator
google
electra
10
493,520
transformers
8
null
true
true
true
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,095
false
## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).

This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)).

## How to use the discriminator in `transformers`

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

# squeeze() drops the batch dimension so each prediction is a scalar.
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
```
c082f04486c91700f68550e8575d2acd
MultiversexPeeps/JemandtheHolograms
MultiversexPeeps
null
21
8
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['text-to-image']
false
true
true
1,272
false
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/MultiversexPeeps/JemandtheHolograms)

### Jem and the Holograms

Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew

If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk

duskgem (use that in your prompt)
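A minimal `diffusers` sketch, assuming the standard Stable Diffusion pipeline API; the prompt wording (beyond the `duskgem` concept token from this card) is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/JemandtheHolograms", torch_dtype=torch.float16
).to("cuda")

# "duskgem" is the concept token from the card; the rest of the prompt is illustrative.
image = pipe("a portrait of duskgem, vibrant 80s cartoon style").images[0]
image.save("jem.png")
```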
0fa5958f7e8ef0168547efe7c5b21501
google/tapas-base-finetuned-tabfact
google
tapas
8
155
transformers
0
text-classification
true
true
false
apache-2.0
['en']
['tab_fact']
null
0
0
0
0
0
0
0
['tapas', 'sequence-classification']
false
true
true
4,764
false
# TAPAS base model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_base` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors. ## Model description TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly train this randomly initialized classification head with the base model on TabFact. ## Intended uses & limitations You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website. ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence [SEP] Flattened table [SEP] ``` ### Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 14 hours. 
The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ```
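## Example usage

A minimal sketch, assuming the standard TAPAS API in `transformers` (older versions may additionally require `torch-scatter`); the table contents and the label convention below are assumptions, so check `model.config.id2label`:

```python
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForSequenceClassification

name = "google/tapas-base-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForSequenceClassification.from_pretrained(name)

# TAPAS expects every table cell as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2,161,000", "3,664,000"]})
inputs = tokenizer(table=table, queries=["Berlin has more inhabitants than Paris"], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed label convention: 1 = supported (entailed), 0 = refuted; check model.config.id2label.
print(logits.argmax(dim=-1).item())
```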
b0c4bd6cb103b5becd1c39dc326d8521
tomekkorbak/practical_panini
tomekkorbak
gpt2
36
0
transformers
0
null
true
false
false
mit
['en']
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,701
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # practical_panini This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'practical_panini', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 
'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1x2al511
d6100adcf928ccbb2fed27e7d7796a82
GW12/wav2vec2-libri-train360-colab
GW12
wav2vec2
15
13
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
14,136
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-libri-train360-colab This model is a fine-tuned version of [GW12/wav2vec2-libri-train100-colab](https://huggingface.co/GW12/wav2vec2-libri-train100-colab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1101 - Wer: 0.1002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:------:|:---------------:|:------:| | 3.1196 | 0.02 | 500 | 0.2020 | 0.1494 | | 0.1695 | 0.04 | 1000 | 0.1600 | 0.1462 | | 0.1726 | 0.06 | 1500 | 0.1996 | 0.1457 | | 0.1654 | 0.08 | 2000 | 0.1531 | 0.1448 | | 0.1665 | 0.1 | 2500 | 0.1582 | 0.1491 | | 0.1555 | 0.12 | 3000 | 0.1566 | 0.1478 | | 0.1562 | 0.13 | 3500 | 0.1555 | 0.1501 | | 0.1604 | 0.15 | 4000 | 0.1465 | 0.1422 | | 0.1522 | 0.17 | 4500 | 0.1423 | 0.1452 | | 0.1534 | 0.19 | 5000 | 0.1375 | 0.1431 | | 0.1576 | 0.21 | 5500 | 0.1872 | 0.1421 | | 0.1543 | 0.23 | 6000 | 0.1547 | 0.1381 | | 0.1501 | 0.25 | 6500 | 0.1446 | 0.1381 | | 0.1508 | 0.27 | 7000 | 0.2108 | 0.1507 | | 0.1479 | 0.29 | 7500 | 0.1495 | 0.1364 | | 0.1474 | 0.31 | 8000 | 0.1571 | 0.1406 | | 0.1475 | 0.33 | 8500 | 0.1570 | 0.1390 | | 0.1453 | 0.35 | 9000 | 0.1547 | 0.1377 | | 0.1465 | 0.37 | 9500 | 0.1633 | 0.1336 | | 0.1424 | 0.38 | 10000 | 0.1344 | 0.1358 | | 0.1417 | 0.4 | 10500 | 0.2518 | 0.1515 | | 0.1427 | 0.42 | 11000 | 0.1697 | 0.1409 | | 0.1434 | 0.44 | 11500 | 0.1649 | 0.1373 | | 0.1384 | 0.46 | 12000 | 0.1743 | 0.1403 | | 0.1394 | 0.48 | 12500 | 0.1485 | 0.1407 | | 0.1392 | 0.5 | 13000 | 0.1421 | 0.1352 | | 2.3614 | 0.52 | 13500 | 0.9494 | 0.1673 | | 0.1621 | 0.54 | 14000 | 0.4273 | 0.1539 | | 0.1454 | 0.56 | 14500 | 0.1764 | 0.1399 | | 0.1453 | 0.58 | 15000 | 0.1750 | 0.1414 | | 0.1375 | 0.6 | 15500 | 0.1845 | 0.1410 | | 0.1436 | 0.62 | 16000 | 0.1583 | 0.1413 | | 0.1405 | 0.63 | 16500 | 0.1893 | 0.1413 | | 0.139 | 0.65 | 17000 | 0.2281 | 0.1619 | | 0.1374 | 0.67 | 17500 | 0.1863 | 0.1413 | | 0.1386 | 0.69 | 18000 | 0.2301 | 0.1479 | | 0.1435 | 0.71 | 18500 | 0.2349 | 0.1579 | | 0.1293 | 0.73 | 19000 | 0.1878 | 0.1461 | | 0.1311 | 0.75 | 19500 | 0.2092 | 0.1342 | | 0.1357 | 0.77 | 20000 | 0.1788 | 0.1421 | | 0.1258 | 0.79 | 20500 | 0.1336 | 0.1302 | | 0.1284 | 0.81 | 21000 | 0.1459 | 0.1306 | | 0.1452 | 0.83 | 21500 | 0.1316 | 0.1319 | | 0.1241 | 0.85 | 22000 | 0.1497 | 0.1285 | | 0.1292 | 0.87 | 22500 | 0.1417 | 0.1318 | | 0.1255 | 0.88 | 23000 | 0.1262 | 0.1305 | | 0.1239 | 0.9 | 23500 | 0.1417 | 0.1302 | | 0.1237 | 0.92 | 24000 | 0.1704 | 0.1309 | | 0.1231 | 0.94 | 24500 | 0.1466 | 0.1308 | | 0.1303 | 0.96 | 25000 | 0.2085 | 0.1392 | | 0.1252 | 0.98 | 25500 | 0.1514 | 0.1441 | | 0.1244 | 1.0 | 26000 | 0.1353 | 0.1282 | | 0.1034 | 1.02 | 26500 | 0.1306 | 0.1279 | | 0.1035 | 1.04 | 27000 | 0.1785 | 0.1288 | | 0.1063 | 1.06 | 27500 | 0.1742 | 0.1311 | | 0.1065 | 1.08 | 28000 | 0.1505 | 0.1269 | 
| 0.1093 | 1.1 | 28500 | 0.1394 | 0.1264 | | 0.1115 | 1.12 | 29000 | 0.1490 | 0.1325 | | 0.1044 | 1.13 | 29500 | 0.5477 | 0.1736 | | 0.1003 | 1.15 | 30000 | 0.2347 | 0.1351 | | 0.1049 | 1.17 | 30500 | 0.2001 | 0.1347 | | 0.1068 | 1.19 | 31000 | 0.1528 | 0.1255 | | 0.1069 | 1.21 | 31500 | 0.1528 | 0.1266 | | 0.1042 | 1.23 | 32000 | 0.2272 | 0.1318 | | 0.1073 | 1.25 | 32500 | 0.5753 | 0.1869 | | 0.1021 | 1.27 | 33000 | 0.3459 | 0.1477 | | 0.1023 | 1.29 | 33500 | 0.2412 | 0.1362 | | 0.0988 | 1.31 | 34000 | 0.2124 | 0.1319 | | 0.1047 | 1.33 | 34500 | 0.3733 | 0.1497 | | 0.1078 | 1.35 | 35000 | 0.1553 | 0.1281 | | 0.0988 | 1.37 | 35500 | 0.1364 | 0.1239 | | 0.0957 | 1.38 | 36000 | 0.1484 | 0.1278 | | 0.1038 | 1.4 | 36500 | 0.1723 | 0.1253 | | 0.1001 | 1.42 | 37000 | 0.3668 | 0.1648 | | 0.101 | 1.44 | 37500 | 0.2136 | 0.1339 | | 0.1022 | 1.46 | 38000 | 0.1140 | 0.1162 | | 0.0989 | 1.48 | 38500 | 0.1628 | 0.1265 | | 0.0982 | 1.5 | 39000 | 0.2204 | 0.1376 | | 0.1012 | 1.52 | 39500 | 0.1716 | 0.1297 | | 0.1067 | 1.54 | 40000 | 0.1362 | 0.1234 | | 0.1022 | 1.56 | 40500 | 0.1170 | 0.1178 | | 0.1011 | 1.58 | 41000 | 0.1578 | 0.1240 | | 0.0845 | 1.6 | 41500 | 0.1659 | 0.1243 | | 0.0929 | 1.62 | 42000 | 0.1813 | 0.1310 | | 0.0904 | 1.63 | 42500 | 0.1309 | 0.1215 | | 0.0885 | 1.65 | 43000 | 0.1964 | 0.1359 | | 0.0895 | 1.67 | 43500 | 0.1309 | 0.1179 | | 0.0855 | 1.69 | 44000 | 0.1472 | 0.1258 | | 0.0876 | 1.71 | 44500 | 0.1189 | 0.1190 | | 0.0925 | 1.73 | 45000 | 0.1477 | 0.1209 | | 0.0866 | 1.75 | 45500 | 0.2537 | 0.1428 | | 0.0938 | 1.77 | 46000 | 0.1406 | 0.1240 | | 0.0901 | 1.79 | 46500 | 0.1416 | 0.1201 | | 0.0839 | 1.81 | 47000 | 0.1323 | 0.1201 | | 0.0866 | 1.83 | 47500 | 0.1176 | 0.1149 | | 0.0876 | 1.85 | 48000 | 0.1141 | 0.1139 | | 0.0857 | 1.87 | 48500 | 0.2148 | 0.1297 | | 0.089 | 1.88 | 49000 | 0.1707 | 0.1231 | | 0.0861 | 1.9 | 49500 | 0.1457 | 0.1183 | | 0.0855 | 1.92 | 50000 | 0.4576 | 0.1654 | | 0.0808 | 1.94 | 50500 | 0.2264 | 0.1285 | | 0.0859 | 1.96 | 51000 | 0.1630 | 0.1201 | | 0.0859 | 1.98 | 51500 | 0.1613 | 0.1165 | | 0.086 | 2.0 | 52000 | 0.1529 | 0.1196 | | 0.0769 | 2.02 | 52500 | 0.1258 | 0.1139 | | 0.0783 | 2.04 | 53000 | 0.1105 | 0.1136 | | 0.0775 | 2.06 | 53500 | 0.1177 | 0.1128 | | 0.08 | 2.08 | 54000 | 0.1328 | 0.1156 | | 0.0765 | 2.1 | 54500 | 0.1229 | 0.1137 | | 0.0791 | 2.12 | 55000 | 0.1218 | 0.1121 | | 0.0831 | 2.13 | 55500 | 0.1106 | 0.1135 | | 0.0769 | 2.15 | 56000 | 0.1466 | 0.1166 | | 0.0761 | 2.17 | 56500 | 0.1177 | 0.1126 | | 0.0779 | 2.19 | 57000 | 0.1249 | 0.1120 | | 0.0749 | 2.21 | 57500 | 0.1258 | 0.1130 | | 0.0746 | 2.23 | 58000 | 0.1268 | 0.1122 | | 0.074 | 2.25 | 58500 | 0.1141 | 0.1153 | | 0.0726 | 2.27 | 59000 | 0.1231 | 0.1107 | | 0.0771 | 2.29 | 59500 | 0.1393 | 0.1125 | | 0.0776 | 2.31 | 60000 | 0.1224 | 0.1115 | | 0.0756 | 2.33 | 60500 | 0.1071 | 0.1085 | | 0.0753 | 2.35 | 61000 | 0.1072 | 0.1089 | | 0.0698 | 2.37 | 61500 | 0.1129 | 0.1094 | | 0.0726 | 2.38 | 62000 | 0.1109 | 0.1106 | | 0.0758 | 2.4 | 62500 | 0.1052 | 0.1103 | | 0.0743 | 2.42 | 63000 | 0.1079 | 0.1106 | | 0.0765 | 2.44 | 63500 | 0.1248 | 0.1108 | | 0.0724 | 2.46 | 64000 | 0.1248 | 0.1076 | | 0.0659 | 2.48 | 64500 | 0.1099 | 0.1088 | | 0.0674 | 2.5 | 65000 | 0.1156 | 0.1098 | | 0.0691 | 2.52 | 65500 | 0.1122 | 0.1093 | | 0.0677 | 2.54 | 66000 | 0.1228 | 0.1082 | | 0.0695 | 2.56 | 66500 | 0.1049 | 0.1066 | | 0.0687 | 2.58 | 67000 | 0.1025 | 0.1062 | | 0.0682 | 2.6 | 67500 | 0.1080 | 0.1064 | | 0.0663 | 2.61 | 68000 | 0.1009 | 0.1058 | | 0.0654 | 2.63 | 68500 | 0.1145 | 0.1071 | | 
0.0641 | 2.65 | 69000 | 0.1178 | 0.1082 | | 0.0662 | 2.67 | 69500 | 0.1106 | 0.1084 | | 0.0623 | 2.69 | 70000 | 0.1086 | 0.1057 | | 0.0692 | 2.71 | 70500 | 0.1048 | 0.1071 | | 0.0663 | 2.73 | 71000 | 0.1119 | 0.1069 | | 0.0639 | 2.75 | 71500 | 0.1147 | 0.1062 | | 0.0597 | 2.77 | 72000 | 0.1121 | 0.1072 | | 0.0688 | 2.79 | 72500 | 0.1149 | 0.1060 | | 0.0616 | 2.81 | 73000 | 0.1126 | 0.1069 | | 0.0633 | 2.83 | 73500 | 0.1302 | 0.1074 | | 0.0651 | 2.85 | 74000 | 0.1260 | 0.1066 | | 0.0637 | 2.86 | 74500 | 0.1233 | 0.1075 | | 0.0641 | 2.88 | 75000 | 0.1199 | 0.1066 | | 0.0655 | 2.9 | 75500 | 0.1249 | 0.1075 | | 0.065 | 2.92 | 76000 | 0.1192 | 0.1061 | | 0.0626 | 2.94 | 76500 | 0.1267 | 0.1069 | | 0.0622 | 2.96 | 77000 | 0.1289 | 0.1094 | | 0.0608 | 2.98 | 77500 | 0.1502 | 0.1096 | | 0.0631 | 3.0 | 78000 | 0.1493 | 0.1099 | | 0.0535 | 3.02 | 78500 | 0.1220 | 0.1064 | | 0.0582 | 3.04 | 79000 | 0.1274 | 0.1077 | | 0.052 | 3.06 | 79500 | 0.1296 | 0.1072 | | 0.0562 | 3.08 | 80000 | 0.1160 | 0.1050 | | 0.0533 | 3.1 | 80500 | 0.1066 | 0.1031 | | 0.0564 | 3.11 | 81000 | 0.1300 | 0.1078 | | 0.0589 | 3.13 | 81500 | 0.1167 | 0.1056 | | 0.0582 | 3.15 | 82000 | 0.1129 | 0.1025 | | 0.0594 | 3.17 | 82500 | 0.1255 | 0.1054 | | 0.0559 | 3.19 | 83000 | 0.1258 | 0.1045 | | 0.0535 | 3.21 | 83500 | 0.1150 | 0.1029 | | 0.0538 | 3.23 | 84000 | 0.1043 | 0.1017 | | 0.0537 | 3.25 | 84500 | 0.1073 | 0.1028 | | 0.0534 | 3.27 | 85000 | 0.1011 | 0.1011 | | 0.0527 | 3.29 | 85500 | 0.0987 | 0.1010 | | 0.0549 | 3.31 | 86000 | 0.1008 | 0.1015 | | 0.0516 | 3.33 | 86500 | 0.1031 | 0.1017 | | 0.0549 | 3.35 | 87000 | 0.1103 | 0.1028 | | 0.056 | 3.36 | 87500 | 0.0980 | 0.1008 | | 0.0528 | 3.38 | 88000 | 0.1045 | 0.1020 | | 0.0555 | 3.4 | 88500 | 0.0979 | 0.1005 | | 0.0517 | 3.42 | 89000 | 0.0948 | 0.0992 | | 0.0495 | 3.44 | 89500 | 0.0974 | 0.1002 | | 0.0496 | 3.46 | 90000 | 0.1035 | 0.1013 | | 0.0497 | 3.48 | 90500 | 0.1167 | 0.1035 | | 0.0485 | 3.5 | 91000 | 0.1098 | 0.1009 | | 0.0465 | 3.52 | 91500 | 0.1168 | 0.1009 | | 0.05 | 3.54 | 92000 | 0.1088 | 0.1005 | | 0.0514 | 3.56 | 92500 | 0.1116 | 0.1000 | | 0.0467 | 3.58 | 93000 | 0.1053 | 0.0998 | | 0.045 | 3.6 | 93500 | 0.1099 | 0.1012 | | 0.0507 | 3.61 | 94000 | 0.1186 | 0.1012 | | 0.0452 | 3.63 | 94500 | 0.1119 | 0.0998 | | 0.0452 | 3.65 | 95000 | 0.1099 | 0.1002 | | 0.0452 | 3.67 | 95500 | 0.1228 | 0.1015 | | 0.0448 | 3.69 | 96000 | 0.1271 | 0.1025 | | 0.0485 | 3.71 | 96500 | 0.1338 | 0.1037 | | 0.048 | 3.73 | 97000 | 0.1288 | 0.1030 | | 0.0476 | 3.75 | 97500 | 0.1183 | 0.1012 | | 0.0457 | 3.77 | 98000 | 0.1171 | 0.1007 | | 0.0492 | 3.79 | 98500 | 0.1142 | 0.1004 | | 0.049 | 3.81 | 99000 | 0.1141 | 0.1006 | | 0.046 | 3.83 | 99500 | 0.1165 | 0.1007 | | 0.0444 | 3.85 | 100000 | 0.1173 | 0.1010 | | 0.0456 | 3.86 | 100500 | 0.1150 | 0.1004 | | 0.0467 | 3.88 | 101000 | 0.1130 | 0.1003 | | 0.0465 | 3.9 | 101500 | 0.1137 | 0.1003 | | 0.0451 | 3.92 | 102000 | 0.1127 | 0.1004 | | 0.0445 | 3.94 | 102500 | 0.1118 | 0.1003 | | 0.0453 | 3.96 | 103000 | 0.1112 | 0.1002 | | 0.0458 | 3.98 | 103500 | 0.1103 | 0.1002 | | 0.0454 | 4.0 | 104000 | 0.1101 | 0.1002 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0 - Datasets 1.13.3 - Tokenizers 0.10.3
ab03cf25836707eaa030a3bd92ac76f1
alexandrainst/da-offensive-detection-base
alexandrainst
xlm-roberta
7
4
transformers
3
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,481
false
# Danish Offensive Text Detection based on XLM-Roberta-Base This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a dataset consisting of approximately 5 million Facebook comments on [DR](https://dr.dk/)'s public Facebook pages. The labels have been automatically generated using weak supervision, based on the [Snorkel](https://www.snorkel.org/) framework. The model achieves SOTA on a test set consisting of 600 Facebook comments annotated using majority vote by three annotators, of which 35.8% were labelled as offensive: | **Model** | **Precision** | **Recall** | **F1-score** | **F2-score** | | :-------- | :------------ | :--------- | :----------- | :----------- | | `alexandrainst/da-offensive-detection-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** | | [`alexandrainst/da-offensive-detection-small`](https://huggingface.co/alexandrainst/da-offensive-detection-small) | 74.13% | 89.30% | 81.01% | 85.79% | | [`A&ttack`](https://github.com/ogtal/A-ttack) | **97.32%** | 50.70% | 66.67% | 56.07% | | [`alexandrainst/da-hatespeech-detection-small`](https://huggingface.co/alexandrainst/da-hatespeech-detection-small) | 86.43% | 56.28% | 68.17% | 60.50% | | [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 75.41% | 42.79% | 54.60% | 46.84% | ## Using the model You can use the model simply by running the following: ```python >>> from transformers import pipeline >>> offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base") >>> offensive_text_pipeline("Din store idiot") [{'label': 'Offensive', 'score': 0.9997463822364807}] ``` Processing multiple documents at the same time can be done as follows: ```python >>> offensive_text_pipeline(["Din store idiot", "ej hvor godt :)"]) [{'label': 'Offensive', 'score': 0.9997463822364807}, {'label': 'Not offensive', 'score': 0.9996451139450073}] ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - gradient_accumulation_steps: 1 - total_train_batch_size: 32 - seed: 4242 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - max_steps: 500000 - fp16: True - eval_steps: 1000 - early_stopping_patience: 100 ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
b2b07cbf99e4f1eddaa2cf291fc8e6c8
shripadbhat/whisper-large-v2-lt
shripadbhat
whisper
15
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['lt']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,368
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large v2 Lithuanian This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3421 - Wer: 29.9321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4255 | 0.09 | 100 | 0.4323 | 37.0310 | | 0.2976 | 0.18 | 200 | 0.3421 | 29.9321 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
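For transcription, a minimal, hedged sketch (not from the original card; the audio file name is a placeholder):

```python
# Hedged sketch: transcribe Lithuanian speech with this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shripadbhat/whisper-large-v2-lt")
# Force Lithuanian transcription rather than language auto-detection.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="lithuanian", task="transcribe"
)
print(asr("speech.wav")["text"])  # "speech.wav" is a placeholder audio file
```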
bf4fd8bd82884891d33cf44992000960
jonatasgrosman/exp_w2v2t_zh-cn_vp-it_s132
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['zh-CN']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'zh-CN']
false
true
true
475
false
# exp_w2v2t_zh-cn_vp-it_s132 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
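A minimal usage sketch with the HuggingSound tool mentioned above (the audio paths are placeholders):

```python
# Hedged sketch: transcribe 16kHz audio files with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_zh-cn_vp-it_s132")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```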
5a9a7605093b12a4115e86b68e9589cb
anuj55/distilbert-base-uncased-finetuned-mrpc
anuj55
distilbert
13
11
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,551
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6236 - Accuracy: 0.8480 - F1: 0.8946 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.4371 | 0.8137 | 0.8746 | | No log | 2.0 | 460 | 0.4117 | 0.8431 | 0.8940 | | 0.4509 | 3.0 | 690 | 0.3943 | 0.8431 | 0.8908 | | 0.4509 | 4.0 | 920 | 0.5686 | 0.8382 | 0.8893 | | 0.1915 | 5.0 | 1150 | 0.6236 | 0.8480 | 0.8946 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.8.1+cu102 - Datasets 1.18.4 - Tokenizers 0.12.1
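Since MRPC is a sentence-pair (paraphrase) task, inference needs both sentences. A hedged sketch, not from the original card (label names depend on the exported config; recent versions of 🤗 Transformers accept a `text`/`text_pair` dict):

```python
# Hedged sketch: score a sentence pair for paraphrase equivalence.
from transformers import pipeline

clf = pipeline("text-classification", model="anuj55/distilbert-base-uncased-finetuned-mrpc")
result = clf({"text": "The company reported record profits.",
              "text_pair": "Record profits were announced by the company."})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}
```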
01694145c8613a86b8b86459848b09ac
cartesinus/xlm-r-base_leyzer_intent-en
cartesinus
xlm-roberta
11
10
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,668
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-r-base-leyzer-en-intent This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1995 - Accuracy: 0.9624 - F1: 0.9624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.9235 | 1.0 | 1061 | 1.5991 | 0.6680 | 0.6680 | | 0.8738 | 2.0 | 2122 | 0.7982 | 0.8359 | 0.8359 | | 0.4406 | 3.0 | 3183 | 0.4689 | 0.9132 | 0.9132 | | 0.2534 | 4.0 | 4244 | 0.3165 | 0.9360 | 0.9360 | | 0.1593 | 5.0 | 5305 | 0.2434 | 0.9507 | 0.9507 | | 0.108 | 6.0 | 6366 | 0.2104 | 0.9599 | 0.9599 | | 0.0914 | 7.0 | 7427 | 0.1995 | 0.9624 | 0.9624 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
44a6edfafe29cac8d0bd65958514ea44
VictorAyora/roberta-base-bne-clasificacion-de-texto-supervisado
VictorAyora
roberta
13
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,322
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-clasificacion-de-texto-supervisado This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - Accuracy: 0.9337 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1955 | 1.0 | 1250 | 0.1809 | 0.9307 | | 0.0979 | 2.0 | 2500 | 0.2290 | 0.9337 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
3dd54218317729c121a4919324b65ebe
Guizmus/SD_PoW_Collection
Guizmus
null
58
0
EveryDream
11
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
6
0
5
1
0
0
0
['stable-diffusion', 'text-to-image', 'image-to-image']
false
true
true
11,631
false
![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/showcase_PoW_neverendingloop.jpg) # Intro This is a collection of models related to the "Picture of the Week" contest on Stable Diffusion discord. I try to make a model out of all the submissions for people to continue to enjoy the theme after the event, and see a little of their designs in other people's creations. The token stays "PoW Style" and I balance the learning on the low side, so that it doesn't just replicate creations. I also make smaller quality models to help make pictures for the contest itself, based on the theme. # 29 November 2022, "The Stable Kitchen" ## Theme : Burgers and Fries Welcome to the VERY FIRST edition of the most Stable Kitchen in the universe! On today’s menu will be Sandwiches & Fries. Since you’re here for the first time, I will explain how it works! You can generate your orders and we will make them for you. Take a seat, flip through the menu, bring all of your favorite ingredients~ * The sandwich with the most cheddar? 5 beef burgers? An infinite fries generator? * Serve us your best sandwich and fries combo! Not even the sky's the limit my friend, You want it? You have it! As long as it's delicious, of course! We’ll see you on the chopping block for this week’s Stable Kitchen! ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/images/theme.png) ## Models ### Burgy ![Burgy](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/images/showcase_burgy.jpg) * Burgers, burgers burgers * training: 40 pictures, 6 epochs of 40 repeats, batch size 6, LR1e-6, EveryDream * balance : Strong, burgers * **Activation token :** `Burgy` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/ckpts/Burgy.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/dataset_Burgy.zip) # 22 November 2022, "Imaginary Friend" ## Theme : Imaginary Friend Do you remember putting your hands into what seemed as if it were just plain air and giggling like a child? Having conversations with someone who “wasn’t there”? Nowadays the term “Imaginary Friend” isn’t as frequently used as it used to be, right? Let’s bring it back. * Can you build your Imaginary Friends actualized? * What traits do you recall of them? Are they still young? Have they grown up now? Do they resemble you, or a creature that isn’t human? * Where would you find this Imaginary Friend? Where do they reside? What do they stand for? Our prompt for this event was created by @Andrekerygma "a boy drinking tea with a cute monster on the bedroom, disney infinity character design, pixar, artstation, vinyl, toy, figurine, 3 d model, cinema 4 d, substance 3 d painter, vray, unreal engine 5, octane render, cinematic" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/images/theme.png) ## Models ### PoW ArtStyle 22-11-22 ![PoW ArtStyle](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/images/showcase_pow_imaginary_friend.jpg) * based on all the submissions to the PoW * training: 73 pictures, 6000 steps on batch 6, 1e-6 polynomial LR. * balance : a little lighter on the style than last week, still manages to reproduce most participants * **Activation token :** `PoW ArtStyle` * Other noticeable tokens : Your Discord username, if you participated. 
Also TMNT, NikeAir Shoes and Sid, Ice Age movie * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/ckpts/PoWArtStyle_ImaginaryFriend.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/PoW_221122_dataset.zip) ### CharacterChan Style ![CharacterChan Style](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CharacterChanStyle-v1.jpg) * based on the "Character" dreamer community of the Stable Diffusion Discord * training: 50 pictures, 160 total repeats, LR1e-6 * balance : correct, but some sub-concepts have overtrained a little, like the clown. * **Activation token :** `CharacterChan Style` * [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CharacterChanStyle-v1.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CharacterChanStyle-v1.zip) * [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#characterchan-style) ### CreatureChan Style ![CreatureChan Style](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CreatureChanStyle-v1.jpg) * based on the "Creature" dreamer community of the Stable Diffusion Discord * training: 50 pictures, 160 total repeats, LR1e-6 * balance : good * **Activation token :** `CreatureChan Style` * [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CreatureChanStyle-v1.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CreatureChanStyle-v1.zip) * [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection#creaturechan-style) # 14 November 2022, "The Never-Ending Loop" ## Theme : The Never-Ending Loop It is a passed-down proverb that lines represent the flow of time itself. They converge and take shape. They twist, tangle, sometimes unravel, break, and then connect again. * Without words, how are we able to accurately represent this flow of time with only lines? geometrically, intricately, asymmetrically, seamlessly, ornately... * Think of a never-ending pattern, texture, or shape– looping on and on for what feels infinite. * Just how detailed are you able to get with your patterns? Our prompt for this event was created by @Asukii ! "the fractal flow of time stretches towards the horizon, surreal fractal intertwined looping pathways, dramatic cinematic perspective, detailed delicate intricate ornate linework, geometric abstract masterwork digital art, quantum wavetracing, ink drawing, optical illusion" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/theme1.png) ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/theme2.png) ## Models ### PoW Style 14-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/showcase_PoW_neverendingloop.jpg) * based on all the submissions to the PoW * training: 101 pictures, 9000 steps on batch 6, 1e-6 polynomial LR. * balance : a little strong on the style but it made it possible to differentiate each participant * **Activation token :** `PoW Style` * Other noticeable tokens : Your Discord username, if you participated. 
Also Rick Roll and "fullbody shot" * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/PoWStyle_NeverEndingLoop.ckpt) * [Diffusers : Guizmus/SD_PoW_Collection/141122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/141122/diffusers/) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_2_dataset.zip) ### Fractime Style ![Fractime Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/showcase_FractimeStyle.jpg) * based on the suggested prompt and theme * training: 50 pictures, 1750 steps on batch 6, 1e-6 polynomial LR. * balance : correct, but the style doesn't apply to every subject * **Activation token :** `Fractime Style` * Other noticeable tokens : intricate, nebula, illusion, person, road, tree, boat * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/FractimeStyle.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_1_dataset.zip) # 09 November 2022, "Abstralities" ## Theme : Abstract Realities Glitch, warp, static, shape, flicker, break, bend, mend Have you ever felt your reality shift out from under your feet? Our perception falters and repairs itself in the blink of an eye. Just how much do our brains influence what we perceive? How much control do we have over molding these realities? With the introduction of AI and its rapid pace taking the world by storm, we are seeing single-handedly just how these realities can bring worlds to fruition. * Can you show us your altered reality? * Are these realities truly broken, or only bent? Our example prompt for this event was created by @Aether ! "household objects floating in space, bedroom, furniture, home living, warped reality, cosmic horror, nightmare, retrofuturism, surrealism, abstract, illustrations by alan nasmith" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/AETHER.png) ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/aether2.png) ## Models ### PoW Style 09-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_pow_final.jpg) * Main model based on all the results from the PoW * training: 51 pictures, 3000 steps on 1e-6 polynomial LR. * balanced on the light side, add attention/weight on the activation token * **Activation token :** `PoW Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_Abstralities.ckpt) * [Diffusers : Guizmus/SD_PoW_Collection/091122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/091122/diffusers/) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/dataset.zip) ### Bendstract Style ![Bendstract Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_bendstract.jpg) * based on the suggested prompt * training: 100 pictures, 7500 steps on 1e-6 polynomial LR. overtrained * **Activation token :** `Bendstract Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/Bendstract-v1.ckpt) ### BendingReality Style ![BendingReality Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_bendingreality.jpg) * based on the suggested prompt * training: 68 pictures, 6000 steps on 1e-6 polynomial LR. 
overtrained * **Activation token :** `BendingReality Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/BendingReality_Style-v1.ckpt) ### PoW Style mid-submissions 09-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_pow_midrun.jpg) * based on the first few submissions * training: 24 pictures, 2400 steps on 1e-6 polynomial LR. a little too trained * **Activation token :** `PoW Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_midrun.ckpt) # License These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
396a723c403a897d5eb487c7e489e86e
popedriver/email-newsletter-model
popedriver
bert
10
9
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
924
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # email-newsletter-model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
7de6772bc9dbfdbbd28a63f03e39a00a
mrm8488/deberta-v3-base-goemotions
mrm8488
deberta-v2
12
658
transformers
1
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,455
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base-goemotions This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7610 - F1: 0.4468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5709 | 1.0 | 6164 | 1.5211 | 0.4039 | | 1.3689 | 2.0 | 12328 | 1.5466 | 0.4198 | | 1.1819 | 3.0 | 18492 | 1.5670 | 0.4520 | | 1.0059 | 4.0 | 24656 | 1.6673 | 0.4479 | | 0.8129 | 5.0 | 30820 | 1.7610 | 0.4468 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
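GoEmotions is commonly treated as a multi-label emotion task; a hedged inference sketch that returns scores for all labels (whether this head was trained single- or multi-label is not stated in the card):

```python
# Hedged sketch: rank emotion labels for a sentence.
from transformers import pipeline

clf = pipeline("text-classification", model="mrm8488/deberta-v3-base-goemotions", top_k=None)
scores = clf("I can't believe you remembered my birthday, thank you!")
print(scores)  # all emotion labels with scores, sorted from highest to lowest
```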
cd9631467df437ef8d5479e97d54a909
salascorp/categorizacion_comercios_v_0.0.7
salascorp
bert
13
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['text-classification', 'generated_from_trainer']
true
true
true
1,023
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # categorizacion_comercios_v_0.0.7 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.4673 - Accuracy: 0.9125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.13.0+cpu - Datasets 2.6.1 - Tokenizers 0.13.1
6bbe2adfbabe6ecd3f34ee47b41d659e
sd-concepts-library/lolo
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
970
false
### Lolo on Stable Diffusion This is the `<lolo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<lolo> 0](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/1.jpeg) ![<lolo> 1](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/2.jpeg) ![<lolo> 2](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/3.jpeg) ![<lolo> 3](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/0.jpeg)
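Alternatively, a hedged sketch of loading this concept with 🤗 Diffusers (requires a diffusers version that provides `load_textual_inversion`; the base model and prompt are illustrative):

```python
# Hedged sketch: load the <lolo> embedding into a Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/lolo")
image = pipe("a photo of <lolo> on a wooden table").images[0]
image.save("lolo.png")
```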
920007b709e11d00d210ef9ddcda06f0
google/multiberts-seed_1-step_1800k
google
bert
8
16
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1800k']
false
true
true
3,527
false
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
c823fa7b1447bcc14be0f5d55bd998d4
sd-concepts-library/cologne
sd-concepts-library
null
10
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,126
false
### cologne on Stable Diffusion This is the `<cologne-dom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cologne-dom> 0](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/3.jpeg) ![<cologne-dom> 1](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/0.jpeg) ![<cologne-dom> 2](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/2.jpeg) ![<cologne-dom> 3](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/1.jpeg) ![<cologne-dom> 4](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/4.jpeg)
6d7bad6d364e550b5c4cb288e0aa74dc
amartyobanerjee/mt5-small-finetuned-amazon-en-es
amartyobanerjee
mt5
13
3
transformers
0
summarization
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
1,995
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0294 - Rouge1: 16.497 - Rouge2: 8.0618 - Rougel: 16.2979 - Rougelsum: 16.1465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.5928 | 1.0 | 1209 | 3.3005 | 14.7843 | 6.5518 | 14.2805 | 14.2951 | | 3.9024 | 2.0 | 2418 | 3.1399 | 16.8202 | 8.6739 | 16.1194 | 16.0844 | | 3.5806 | 3.0 | 3627 | 3.0869 | 18.1223 | 9.3051 | 17.7533 | 17.7254 | | 3.4201 | 4.0 | 4836 | 3.0590 | 17.654 | 9.0154 | 17.1853 | 17.1769 | | 3.3202 | 5.0 | 6045 | 3.0598 | 17.612 | 8.6707 | 17.4662 | 17.2963 | | 3.2436 | 6.0 | 7254 | 3.0409 | 16.7938 | 8.3054 | 16.6141 | 16.4853 | | 3.2079 | 7.0 | 8463 | 3.0332 | 16.7246 | 8.2362 | 16.5065 | 16.3611 | | 3.1801 | 8.0 | 9672 | 3.0294 | 16.497 | 8.0618 | 16.2979 | 16.1465 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
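A minimal, hedged inference sketch (the review text is illustrative):

```python
# Hedged sketch: summarize a product review with the fine-tuned mT5 model.
from transformers import pipeline

summarizer = pipeline("summarization", model="amartyobanerjee/mt5-small-finetuned-amazon-en-es")
review = ("Nothing special at all about this product... the book is too small "
          "and stiff and hard to write in.")
print(summarizer(review)[0]["summary_text"])
```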
1552be2a80124a6f5ee5f3866dec96ea
plasmo/colorjizz-512px
plasmo
null
44
4
diffusers
4
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
1
1
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
957
false
### Colorjizz-512px v.1.0 for Stable Diffusion 1.5 Colorjizz Image Pack, brought to you using 130 training images (512 resolution), 8,000 training steps, and 30% training text; modeled with permission using creations inspired by Destiny K (Twitter: @destinykrainbow) ### To Activate: include "colorjizz" in your prompt. ### NOTE: Colorjizz-768px version recommended for higher resolution and available [HERE](https://huggingface.co/plasmo/colorjizz-768px) Sample pictures of this concept (512px model): ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00223.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00224.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00225.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00226.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00227.jpg)
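A hedged generation sketch, assuming the repository also ships weights in the diffusers layout (the library tag above suggests it does; the prompt is illustrative):

```python
# Hedged sketch: generate with the activation token "colorjizz".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "plasmo/colorjizz-512px", torch_dtype=torch.float16
).to("cuda")
image = pipe("colorjizz, portrait of a fox, vivid rainbow colors").images[0]
image.save("colorjizz_fox.png")
```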
f79cbf9f952c352e287ea6e8476bb24b
lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class
lmvasque
bert
13
3
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
6,076
false
## Readability benchmark (ES): mbert-es-paragraphs-2class This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). ## Models Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets. You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link). These are the available models you can use (current model page in bold): | Model | Granularity | # classes | |-----------------------------------------------------------------------------------------------------------|----------------|:---------:| | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 | | **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class)** | **paragraphs** | **2** | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 | | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 | | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 | For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training. ## Results These are our results for all the readability models in different settings. 
Please select your model based on the desired performance: | Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) | |-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:| | Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 | | Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 | | Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 | | Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 | | Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 | | Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 | | Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** | | Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 | | Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 | | Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** | | Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 | | Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 | | Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 | | Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 | ## Citation If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published) ``` @inproceedings{vasquez-rodriguez-etal-2022-benchmarking, title = "A Benchmark for Neural Readability Assessment of Texts in Spanish", author = "V{\'a}squez-Rodr{\'\i}guez, Laura and Cuenca-Jim{\'e}nez, Pedro-Manuel and Morales-Esquivel, Sergio Esteban and Alva-Manchego, Fernando", booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022", month = dec, year = "2022", } ```
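A minimal, hedged inference sketch for this checkpoint (the Spanish paragraph is illustrative; the card does not spell out the label names, so check the model config):

```python
# Hedged sketch: classify a Spanish paragraph as simple vs. complex.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class",
)
paragraph = "La fotosíntesis es el proceso por el cual las plantas producen su alimento."
print(clf(paragraph))  # e.g. [{'label': ..., 'score': ...}]
```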
3a0eb1787cd268fff6fa4891c710ae5f
sd-concepts-library/naf
sd-concepts-library
null
10
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,052
false
### naf on Stable Diffusion This is the `<nal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<nal> 0](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/3.jpeg) ![<nal> 1](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/0.jpeg) ![<nal> 2](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/2.jpeg) ![<nal> 3](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/1.jpeg) ![<nal> 4](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/4.jpeg)
e616c75f07bd4d9fa402390d0dbe3bb3
sreddy1/t5-end2end-questions-generation-full
sreddy1
t5
6
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,172
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation-full This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5811 | 0.34 | 100 | 1.8916 | | 1.9668 | 0.68 | 200 | 1.7116 | | 1.8274 | 1.02 | 300 | 1.6512 | | 1.7424 | 1.36 | 400 | 1.6294 | | 1.7076 | 1.69 | 500 | 1.6024 | | 1.7001 | 2.03 | 600 | 1.5916 | | 1.6266 | 2.37 | 700 | 1.5881 | | 1.6275 | 2.71 | 800 | 1.5772 | | 1.6146 | 3.05 | 900 | 1.5824 | | 1.5699 | 3.39 | 1000 | 1.5776 | | 1.5635 | 3.73 | 1100 | 1.5710 | | 1.5484 | 4.07 | 1200 | 1.5698 | | 1.5199 | 4.41 | 1300 | 1.5616 | | 1.5352 | 4.75 | 1400 | 1.5661 | | 1.5174 | 5.08 | 1500 | 1.5633 | | 1.4955 | 5.42 | 1600 | 1.5603 | | 1.4904 | 5.76 | 1700 | 1.5631 | | 1.5033 | 6.1 | 1800 | 1.5572 | | 1.4853 | 6.44 | 1900 | 1.5588 | | 1.4679 | 6.78 | 2000 | 1.5588 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
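The card does not document the expected input format. End-to-end question-generation fine-tunes of T5 are usually prompted with a `generate questions:` prefix and emit several questions separated by a `<sep>` token; assuming this checkpoint follows that convention (an assumption, not stated above), inference could look like:

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="sreddy1/t5-end2end-questions-generation-full",
)

text = (
    "generate questions: The Eiffel Tower was completed in 1889 and "
    "remains one of the most visited monuments in the world."
)
output = generator(text, max_length=128, num_beams=4)[0]["generated_text"]

# split on the assumed <sep> delimiter to recover individual questions
print([q.strip() for q in output.split("<sep>") if q.strip()])
```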
6de847c0d793da7ea7d30e7b2ed4212f
AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-3e-7-v2
AlekseyKorshuk
opt
13
2
transformers
0
text-generation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,015
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-book-handwritten-io-constant-3e-7-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5293 - Accuracy: 0.2725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5856 | 0.08 | 6 | 2.5957 | 0.2697 | | 2.6027 | 0.16 | 12 | 2.5938 | 0.2698 | | 2.619 | 0.24 | 18 | 2.5879 | 0.2700 | | 2.6121 | 0.32 | 24 | 2.5840 | 0.2702 | | 2.6024 | 0.4 | 30 | 2.5762 | 0.2706 | | 2.5878 | 0.48 | 36 | 2.5703 | 0.2707 | | 2.5541 | 0.56 | 42 | 2.5625 | 0.2710 | | 2.5207 | 0.64 | 48 | 2.5566 | 0.2713 | | 2.4577 | 0.72 | 54 | 2.5488 | 0.2715 | | 2.5614 | 0.8 | 60 | 2.5430 | 0.2718 | | 2.6959 | 0.88 | 66 | 2.5352 | 0.2722 | | 2.5084 | 0.96 | 72 | 2.5293 | 0.2725 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.12.1
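The card only documents training; for inference, the fine-tuned OPT checkpoint should work with the standard text-generation pipeline. A sketch (the prompt is illustrative; at 6.7B parameters the model needs a large GPU, and `device_map="auto"` requires `accelerate`):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-3e-7-v2",
    device_map="auto",  # shard/offload the 6.7B weights automatically
)

prompt = "The most important principle in investing is"
print(generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```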
67c7622e0bc6afa5c467168cc97efdb4
pollner/test_trainer
pollner
distilbert
13
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,243
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4375 - Rmse: 0.6614 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0663 | 1.0 | 2639 | 0.5119 | 0.7155 | | 0.3704 | 2.0 | 5278 | 0.4375 | 0.6614 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
3dc10e96cd1a1a6093bcdcf34b164466
lgris/sew-tiny-portuguese-cv7
lgris
sew
25
3
transformers
1
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
true
true
true
3,775
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sew-tiny-portuguese-cv7 This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4232 - Wer: 0.2745 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | No log | 2.6 | 1000 | 1.0034 | 0.7308 | | 4.1307 | 5.19 | 2000 | 0.6274 | 0.4721 | | 4.1307 | 7.79 | 3000 | 0.5541 | 0.4130 | | 1.3117 | 10.39 | 4000 | 0.5302 | 0.3880 | | 1.3117 | 12.99 | 5000 | 0.5082 | 0.3644 | | 1.2047 | 15.58 | 6000 | 0.4818 | 0.3539 | | 1.2047 | 18.18 | 7000 | 0.4822 | 0.3477 | | 1.14 | 20.78 | 8000 | 0.4781 | 0.3428 | | 1.14 | 23.38 | 9000 | 0.4840 | 0.3401 | | 1.0818 | 25.97 | 10000 | 0.4613 | 0.3251 | | 1.0818 | 28.57 | 11000 | 0.4569 | 0.3257 | | 1.0451 | 31.17 | 12000 | 0.4494 | 0.3132 | | 1.0451 | 33.77 | 13000 | 0.4560 | 0.3201 | | 1.011 | 36.36 | 14000 | 0.4687 | 0.3174 | | 1.011 | 38.96 | 15000 | 0.4397 | 0.3122 | | 0.9785 | 41.56 | 16000 | 0.4605 | 0.3173 | | 0.9785 | 44.16 | 17000 | 0.4380 | 0.3064 | | 0.9458 | 46.75 | 18000 | 0.4372 | 0.3048 | | 0.9458 | 49.35 | 19000 | 0.4426 | 0.3039 | | 0.9126 | 51.95 | 20000 | 0.4317 | 0.2962 | | 0.9126 | 54.54 | 21000 | 0.4345 | 0.2960 | | 0.8926 | 57.14 | 22000 | 0.4365 | 0.2948 | | 0.8926 | 59.74 | 23000 | 0.4306 | 0.2940 | | 0.8654 | 62.34 | 24000 | 0.4303 | 0.2928 | | 0.8654 | 64.93 | 25000 | 0.4351 | 0.2915 | | 0.8373 | 67.53 | 26000 | 0.4340 | 0.2909 | | 0.8373 | 70.13 | 27000 | 0.4279 | 0.2907 | | 0.83 | 72.73 | 28000 | 0.4214 | 0.2867 | | 0.83 | 75.32 | 29000 | 0.4256 | 0.2849 | | 0.8062 | 77.92 | 30000 | 0.4281 | 0.2826 | | 0.8062 | 80.52 | 31000 | 0.4398 | 0.2865 | | 0.7846 | 83.12 | 32000 | 0.4218 | 0.2812 | | 0.7846 | 85.71 | 33000 | 0.4227 | 0.2791 | | 0.7697 | 88.31 | 34000 | 0.4200 | 0.2767 | | 0.7697 | 90.91 | 35000 | 0.4285 | 0.2791 | | 0.7539 | 93.51 | 36000 | 0.4238 | 0.2777 | | 0.7539 | 96.1 | 37000 | 0.4288 | 0.2757 | | 0.7413 | 98.7 | 38000 | 0.4205 | 0.2748 | | 0.7413 | 101.3 | 39000 | 0.4241 | 0.2761 | | 0.7348 | 103.89 | 40000 | 0.4232 | 0.2745 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
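For inference, the fine-tuned SEW checkpoint can be driven through the standard ASR pipeline; the speech input should be 16 kHz, and the file path below is a placeholder:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="lgris/sew-tiny-portuguese-cv7")

# placeholder path; expects 16 kHz Portuguese speech
print(asr("caminho/para/audio.wav")["text"])
```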
d935a6360d9a953a5cac221e84c38f77
Mehtap/whisper-base-2023-01-31
Mehtap
whisper
16
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['tr']
null
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
2,009
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # base Turkish Whisper (bTW) This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ermetal Meetings dataset. It achieves the following results on the evaluation set: - Loss: 0.8800 - Wer: 0.8060 - Cer: 0.7585 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.8904 | 1.32 | 100 | 1.5873 | 0.8893 | 0.5437 | | 0.8039 | 2.63 | 200 | 0.9239 | 0.9076 | 0.5721 | | 0.5988 | 3.95 | 300 | 0.7970 | 0.7850 | 0.4821 | | 0.384 | 5.26 | 400 | 0.7586 | 0.7164 | 0.5206 | | 0.2643 | 6.58 | 500 | 0.7578 | 0.9130 | 0.6843 | | 0.2026 | 7.89 | 600 | 0.7627 | 0.9147 | 0.7228 | | 0.1091 | 9.21 | 700 | 0.8043 | 0.8363 | 0.8283 | | 0.0623 | 10.53 | 800 | 0.8342 | 0.7615 | 0.7619 | | 0.0436 | 11.84 | 900 | 0.8577 | 0.7079 | 0.6824 | | 0.0348 | 13.16 | 1000 | 0.8800 | 0.8060 | 0.7585 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.12.0+cu102 - Datasets 2.9.0 - Tokenizers 0.13.2
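As a sketch of inference with this checkpoint (not from the card itself), decoding can be pinned to Turkish transcription via the processor's decoder prompt ids; the audio file is a placeholder and is resampled to the 16 kHz rate Whisper expects:

```python
import librosa
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "Mehtap/whisper-base-2023-01-31"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

speech, _ = librosa.load("toplanti.wav", sr=16000)  # placeholder file
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

# force Turkish transcription instead of letting the model guess the language
forced_ids = processor.get_decoder_prompt_ids(language="turkish", task="transcribe")
with torch.no_grad():
    generated = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)

print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```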
5347f599a74a4de3641bfef0a9057635
hr16/noah-titan-5000-8e-7
hr16
null
18
6
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
0
2
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
535
false
### Model Dreambooth concept Noah_Titan_5000_8e-7 was trained by hr16 using the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of the concept: WIP
519afe4580a0ae6a7c0c94a7d9da5e06
okho0653/distilbert-base-zero-shot
okho0653
distilbert
11
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,132
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-zero-shot This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7147 - eval_accuracy: 0.0741 - eval_f1: 0.1379 - eval_runtime: 1.1794 - eval_samples_per_second: 22.894 - eval_steps_per_second: 1.696 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
a56ca1ff88223b2b4d64a597d0ad7f64
MultiBertGunjanPatrick/multiberts-seed-1-900k
MultiBertGunjanPatrick
bert
7
2
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-1']
false
true
true
6,483
false
# MultiBERTs Seed 1 Checkpoint 900k (uncased) This is the seed-1 MultiBERTs (pretrained BERT) model at its 900k-step intermediate checkpoint, pretrained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-900k') model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
6b88e5ca4773604fdbc1fcbbdfb77917
DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1
DrishtiSharma
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['bas']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'bas', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,689
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bas-v1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset. It achieves the following results on the evaluation set: - Loss: 0.5997 - Wer: 0.3870 ### Evaluation Commands 1. To evaluate on mozilla-foundation/common_voice_8_0 with test split python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data Basaa (bas) language isn't available in speech-recognition-community-v2/dev_data ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000111 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 12.7076 | 5.26 | 200 | 3.6361 | 1.0 | | 3.1657 | 10.52 | 400 | 3.0101 | 1.0 | | 2.3987 | 15.78 | 600 | 0.9125 | 0.6774 | | 1.0079 | 21.05 | 800 | 0.6477 | 0.5352 | | 0.7392 | 26.31 | 1000 | 0.5432 | 0.4929 | | 0.6114 | 31.57 | 1200 | 0.5498 | 0.4639 | | 0.5222 | 36.83 | 1400 | 0.5220 | 0.4561 | | 0.4648 | 42.1 | 1600 | 0.5586 | 0.4289 | | 0.4103 | 47.36 | 1800 | 0.5337 | 0.4082 | | 0.3692 | 52.62 | 2000 | 0.5421 | 0.3861 | | 0.3403 | 57.88 | 2200 | 0.5549 | 0.4096 | | 0.3011 | 63.16 | 2400 | 0.5833 | 0.3925 | | 0.2932 | 68.42 | 2600 | 0.5674 | 0.3815 | | 0.2696 | 73.68 | 2800 | 0.5734 | 0.3889 | | 0.2496 | 78.94 | 3000 | 0.5968 | 0.3985 | | 0.2289 | 84.21 | 3200 | 0.5888 | 0.3893 | | 0.2091 | 89.47 | 3400 | 0.5849 | 0.3852 | | 0.2005 | 94.73 | 3600 | 0.5938 | 0.3875 | | 0.1876 | 99.99 | 3800 | 0.5997 | 0.3870 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
caafdc02df988cf525f0f8868ce37e6e
Katrzyna/old
Katrzyna
bert
14
5
transformers
0
fill-mask
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,286
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-basil This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9097 | 1.0 | 780 | 1.4978 | | 1.5358 | 2.0 | 1560 | 1.3439 | | 1.4259 | 3.0 | 2340 | 1.2881 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
67bf0325a4b490d9dd4f93e9aa49434e
XLab/rst-information-extraction-11b
XLab
t5
6
13
transformers
3
text2text-generation
true
false
false
afl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
11,247
false
<p align="center"> <br> <img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/> <br> </p> # reStructured Pre-training (RST) official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html) #### RST is a new paradigm for language pre-training, which * unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model, * surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.) * achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average student and 15 points higher than GPT3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) in the 2018 English exam In such a pre-training paradigm, * Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing * Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access. ## Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks, factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | **rst-information-extraction-11b** | **Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity** | **Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains**| | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense 
reasoning | | rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering| | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks | ## Have a try? ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b") model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b") inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` ## Data for reStructure Pre-training This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research. We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals. 
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush: | Mine | Signal | #Sample | Use in DataLab | Some Applications | | --- | --- | --- | --- | --- | | [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion| | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification | | [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning| | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion | | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning | | [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation| | [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation | | [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference| |[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension| | [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | 
`load_dataset("rst", "qa_logiqa")` | Reading comprehension| | [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension | | [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension| | [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension| | [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension | | [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification| | [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion| | [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition| | [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion| ## Bibtext for Citation Info ``` @article{yuan2022restructured, title={reStructured Pre-training}, author={Yuan, Weizhe and Liu, Pengfei}, journal={arXiv preprint arXiv:2206.11147}, year={2022} } ```
1e3b16b58c38c6913169686645b085b9
Gladiator/albert-large-v2_ner_wnut_17
Gladiator
albert
12
6
transformers
0
token-classification
true
false
false
apache-2.0
null
['wnut_17']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,705
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-large-v2_ner_wnut_17 This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2429 - Precision: 0.7446 - Recall: 0.5335 - F1: 0.6216 - Accuracy: 0.9582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.3051 | 0.7929 | 0.3206 | 0.4566 | 0.9410 | | No log | 2.0 | 426 | 0.2151 | 0.7443 | 0.4665 | 0.5735 | 0.9516 | | 0.17 | 3.0 | 639 | 0.2310 | 0.7364 | 0.5012 | 0.5964 | 0.9559 | | 0.17 | 4.0 | 852 | 0.2387 | 0.7564 | 0.5311 | 0.6240 | 0.9578 | | 0.0587 | 5.0 | 1065 | 0.2429 | 0.7446 | 0.5335 | 0.6216 | 0.9582 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
9bd0c7e60da9286b8754cba93ef81fa3
amitkayal/whisper-tiny-hi
amitkayal
whisper
25
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['hi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,435
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-hi This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7990 - Wer: 43.8869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1747 | 7.02 | 1000 | 0.5674 | 41.6800 | | 0.0466 | 14.03 | 2000 | 0.7042 | 43.7378 | | 0.0174 | 22.0 | 3000 | 0.7990 | 43.8869 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.10.0 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
1c6c82a988a6517686dbe6a7ba7f64f7
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
anas-awadalla
bert
16
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,000
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
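Although trained in a few-shot setting (k=16), the checkpoint is a standard extractive-QA head and can be queried with the question-answering pipeline; the question/context pair below is illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8",
)

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```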
281f4ddf3de571d1eba06d86b0b294b5
heegyu/kogpt-neox-tiny
heegyu
gpt_neox
8
5
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,374
false
A small GPT model trained for a proof of concept (PoC). ## Model configuration - GPT-Neo-X, Pytorch - 2 layers, 512 hidden dim, 2048 intermediate, 8 heads, 8000 vocab size - 512 max_seq_len - Model size: 13M ## Training datasets - AIHub SNS conversations (747MB) - AIHub colloquial Korean (435MB) - Korean Wikipedia (773MB) - Namuwiki (5.8GB) - National Institute of Korean Language messenger conversations (21MB) ## Training environment and hyperparameters - NVIDIA Tesla T4 (16GB VRAM) - fp16, deepspeed stage2 - 350000 steps, took 2 days and 17 hours - batch size 32 - learning rate 5e-5, linear scheduler - final train loss: 3.684 - Training code: https://github.com/HeegyuKim/language-model ### <details> <summary>deepspeed parameter</summary> <div markdown="1"> ```json { "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 5e8, "reduce_scatter": true, "reduce_bucket_size": 5e8, "overlap_comm": true, "contiguous_gradients": true }, "train_micro_batch_size_per_gpu": "auto", "train_batch_size": "auto", "steps_per_print": 1000 } ``` </div> </details> ### example ```python from transformers import pipeline generator = pipeline('text-generation', model='heegyu/kogpt-neox-tiny') def generate(prefix: str): print(generator(prefix, do_sample=True, top_p=0.6, repetition_penalty=1.4, max_length=128, penalty_alpha=0.6)[0]["generated_text"]) generate("0 : 만약 오늘이 ") generate("오늘 정부가 발표한 내용에 따르면") generate("수학이란 학자들의 정의에 따라") generate("영상 보는데 너무 웃겨 ") ``` Execution results: ``` 0 : 만약 오늘이 #@이름#가 먼저 자는거면 또 자는건데ㅋ 1 : ㅇㄷㆍ_== 2 : 아까 아침에 일어났어?!! 3 : 아니아니 근데 이따 시간표가 끝날때까지 잤지않게 일주일동안 계속 잠들었엉.. 나도 지금 일어났는데, 너무 늦을듯해. 그러다 다시 일어나서 다행이다 4 : 어차피 2:30분에 출발할것같아요~ 5 : 이제 곧 일어낫어요 오늘 정부가 발표한 내용에 따르면, 한참 여부는 "한숨이 살릴 수 있는 게 무엇인가"라는 질문에 대해 말할 것도 없다. 하지만 그건 바로 이러한 문제 때문일 것이다." 실제로 해당 기사에서 나온 바 있다. 실제로 당시 한국에서 이게 사실이 아니라고 밝혔다는 건데도 불구하고 말이다. 기사화되기는 했는데 '한국어'의 경우에도 논란이 있었다. 사실 이 부분만 언급되어있고, 대한민국은 무조건 비난을 하는 것이 아니라 본인의 실수를 저지른다는 것인데 반해 유튜브 채널의 영상에서는 그냥 저런 댓글이 올라오 수학이란 학자들의 정의에 따라 이 교과서에서 교육하는 경우가 많은데, 그 이유는 교수들(실제로 학생들은 공부도 하교할 수 있는 등)을 학교로 삼아 강의실에서 듣기 때문이다. 이 학교의 교사들이 '학교'를 선택한 것은 아니지만 교사가 "학생들의"라는 뜻이다."라고 한다. 하지만 이쪽은 교사와 함께 한 명씩 입학식 전부터 교사의 인생들을 시험해보고 싶다는 의미다. 또한 수학여행에서는 가르칠 수도 있고 수학여행을 갔거나 전공 과목으로 졸업하고 교사는 다른 영상 보는데 너무 웃겨 #@기타#웃기네 0 : ㅋㅌㄱㆍ이별명인듯 1 : ㅠㅜ그렇지뭐? 아빠는 아니고? 왜케 많음... 나도 그럴수가 없어.. 내가 말한건데 ㅎ,, #@이름#씨에스나게놀아주까봐 어제부터 내맘대로해달라햇어 그래서 우리집에서 안쓰럽거든 근데 진짜 많이해서 걱정하지말라고 해줬으면좋 ```
4da7ea74eb917522d5865402a6824e75
dropout05/t5-realnewslike-super-tiny
dropout05
t5
8
4
transformers
0
text2text-generation
false
false
true
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
518
false
**Don't use this model for any applied task. It is too small to be practically useful. It is just a part of a weird research project.** An extremely small version of T5 with the following parameters ```python "d_ff": 1024, "d_kv": 64, "d_model": 256, "num_heads": 4, "num_layers": 1, # yes, just one layer ``` The model was pre-trained on the `realnewslike` subset of C4 for 1 epoch with sequence length `64`. Corresponding WandB run: [click](https://wandb.ai/guitaricet/t5-lm/runs/2yvuxsfz?workspace=user-guitaricet).
4221eee6cb6bfa6c6d4ee1723a1e9466
ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-20_09_36
ali2066
distilbert
13
10
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,796
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_token_itr0_2e-05_all_16_02_2022-20_09_36 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1743 - Precision: 0.3429 - Recall: 0.3430 - F1: 0.3430 - Accuracy: 0.9446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 38 | 0.3322 | 0.0703 | 0.1790 | 0.1010 | 0.8318 | | No log | 2.0 | 76 | 0.2644 | 0.1180 | 0.2343 | 0.1570 | 0.8909 | | No log | 3.0 | 114 | 0.2457 | 0.1624 | 0.2583 | 0.1994 | 0.8980 | | No log | 4.0 | 152 | 0.2487 | 0.1486 | 0.2583 | 0.1887 | 0.8931 | | No log | 5.0 | 190 | 0.2395 | 0.1670 | 0.2694 | 0.2062 | 0.8988 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
3562bdc6605097cdb96d75303f3924ba
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s756
jonatasgrosman
unispeech-sat
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pt']
false
true
true
463
false
# exp_w2v2t_pt_unispeech-sat_s756 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
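Since the model was fine-tuned with HuggingSound, the same library offers the shortest inference path; a minimal sketch (the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s756")
audio_paths = ["/caminho/para/arquivo.mp3", "/caminho/para/outro_arquivo.wav"]  # placeholders

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```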
91871a876a2b7c0495d5d86b66645aff
riddhi17pawar/distilbert-base-uncased-finetuned
riddhi17pawar
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['twitter-sentiment-analysis']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,090
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the twitter-sentiment-analysis dataset. It achieves the following results on the evaluation set: - Loss: 0.4337 - Accuracy: 0.812 - Precision: 0.7910 - F1: 0.8042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
e9659bcf6d7affd3991a0bddb0fbb2bc
arnolfokam/mbert-base-uncased-ner-kin
arnolfokam
bert
9
16
transformers
0
token-classification
true
false
false
apache-2.0
['kin']
['masakhaner']
null
0
0
0
0
0
0
0
['NER']
false
true
true
2,380
false
# Model description **mbert-base-uncased-ner-kin** is a model based on the fine-tuned Multilingual BERT base uncased model, previously fine-tuned for Named Entity Recognition using 10 high-resourced languages. It has been trained to recognize four types of entities: - dates & time (DATE) - Location (LOC) - Organizations (ORG) - Person (PER) # Intended Use - Intended to be used for research purposes concerning Named Entity Recognition for African Languages. - Not intended for practical purposes. # Training Data This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups. # Training procedure This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com) #### Hyperparameters - **Learning Rate:** 5e-5 - **Batch Size:** 32 - **Maximum Sequence Length:** 164 - **Epochs:** 30 # Evaluation Data We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset with no thresholding. # Metrics - Precision - Recall - F1-score # Limitations - The size of the pre-trained language model prevents its usage in anything other than research. - Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed in a production system. - The training data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance. # Caveats and Recommendations - The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus. # Results Model Name| Precision | Recall | F1-score -|-|-|- **mbert-base-uncased-ner-kin**| 81.95 |81.55 |81.75 # Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin") model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Rayon Sports yasinyishije rutahizamu w’Umurundi" ner_results = nlp(example) print(ner_results) ```
dc157ba2f5588f79d3b2351bb07aee57
clarin-pl/FastPDN-distiluse
clarin-pl
distilbert
8
11
transformers
0
token-classification
true
false
false
cc-by-4.0
['pl']
['clarin-pl/kpwr-ner']
null
0
0
0
0
0
0
0
['ner']
false
true
true
2,324
false
# FastPDN FastPolDeepNer is a model for Named Entity Recognition, designed for easy use, training and configuration. The forerunner of this project is [PolDeepNer2](https://gitlab.clarin-pl.eu/information-extraction/poldeepner2). The model implements a pipeline consisting of data processing and training using: hydra, pytorch, pytorch-lightning, transformers. Source code: https://gitlab.clarin-pl.eu/grupa-wieszcz/ner/fast-pdn ## How to use Here is how to use this model to get Named Entities in text: ```python from transformers import pipeline ner = pipeline('ner', model='clarin-pl/FastPDN', aggregation_strategy='simple') text = "Nazywam się Jan Kowalski i mieszkam we Wrocławiu." ner_results = ner(text) for output in ner_results: print(output) {'entity_group': 'nam_liv_person', 'score': 0.9996054, 'word': 'Jan Kowalski', 'start': 12, 'end': 24} {'entity_group': 'nam_loc_gpe_city', 'score': 0.998931, 'word': 'Wrocławiu', 'start': 39, 'end': 48} ``` Here is how to use this model to get the logits for every token in text: ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("clarin-pl/FastPDN") model = AutoModelForTokenClassification.from_pretrained("clarin-pl/FastPDN") text = "Nazywam się Jan Kowalski i mieszkam we Wrocławiu." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training data The FastPDN model was trained on datasets (with 82 class versions) of kpwr and cen. Annotation guidelines are specified [here](https://clarin-pl.eu/dspace/bitstream/handle/11321/294/WytyczneKPWr-jednostkiidentyfikacyjne.pdf). ## Pretraining FastPDN models were fine-tuned from the following pretrained models: - [herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) - [distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) ## Evaluation Runs trained on `cen_n82` and `kpwr_n82`: | name |test/f1|test/pdn2_f1|test/acc|test/precision|test/recall| |---------|-------|------------|--------|--------------|-----------| |distiluse| 0.53 | 0.61 | 0.95 | 0.55 | 0.54 | | herbert | 0.68 | 0.78 | 0.97 | 0.7 | 0.69 | ## Authors - Grupa Wieszcze CLARIN-PL - Wiktor Walentynowicz ## Contact - Norbert Ropiak (norbert.ropiak@pwr.edu.pl)
eff5e57ec2b7c34bfcb6aabecfe5124f
muhtasham/tiny-mlm-glue-mrpc-target-glue-qqp
muhtasham
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,162
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-mrpc-target-glue-qqp This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-mrpc](https://huggingface.co/muhtasham/tiny-mlm-glue-mrpc) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4096 - Accuracy: 0.7995 - F1: 0.7718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.5796 | 0.04 | 500 | 0.5174 | 0.7297 | 0.6813 | | 0.5102 | 0.09 | 1000 | 0.4804 | 0.7541 | 0.7035 | | 0.4957 | 0.13 | 1500 | 0.4916 | 0.7412 | 0.7152 | | 0.4798 | 0.18 | 2000 | 0.4679 | 0.7549 | 0.7221 | | 0.4728 | 0.22 | 2500 | 0.4563 | 0.7624 | 0.7270 | | 0.4569 | 0.26 | 3000 | 0.4501 | 0.7673 | 0.7340 | | 0.4583 | 0.31 | 3500 | 0.4480 | 0.7682 | 0.7375 | | 0.4502 | 0.35 | 4000 | 0.4498 | 0.7665 | 0.7387 | | 0.4514 | 0.4 | 4500 | 0.4452 | 0.7681 | 0.7410 | | 0.4416 | 0.44 | 5000 | 0.4209 | 0.7884 | 0.7491 | | 0.4297 | 0.48 | 5500 | 0.4288 | 0.7826 | 0.7502 | | 0.4299 | 0.53 | 6000 | 0.4069 | 0.8001 | 0.7559 | | 0.4248 | 0.57 | 6500 | 0.4194 | 0.7896 | 0.7547 | | 0.4257 | 0.62 | 7000 | 0.4063 | 0.7998 | 0.7582 | | 0.418 | 0.66 | 7500 | 0.4059 | 0.8038 | 0.7639 | | 0.4306 | 0.7 | 8000 | 0.4111 | 0.7964 | 0.7615 | | 0.4212 | 0.75 | 8500 | 0.3990 | 0.8065 | 0.7672 | | 0.4143 | 0.79 | 9000 | 0.4227 | 0.7875 | 0.7604 | | 0.4121 | 0.84 | 9500 | 0.3906 | 0.8098 | 0.7667 | | 0.4138 | 0.88 | 10000 | 0.3872 | 0.8152 | 0.7725 | | 0.4082 | 0.92 | 10500 | 0.3843 | 0.8148 | 0.7700 | | 0.4084 | 0.97 | 11000 | 0.3863 | 0.8170 | 0.7740 | | 0.4067 | 1.01 | 11500 | 0.4001 | 0.8037 | 0.7707 | | 0.3854 | 1.06 | 12000 | 0.3814 | 0.8182 | 0.7756 | | 0.3945 | 1.1 | 12500 | 0.3861 | 0.8132 | 0.7761 | | 0.3831 | 1.14 | 13000 | 0.3917 | 0.8110 | 0.7750 | | 0.3722 | 1.19 | 13500 | 0.4096 | 0.7995 | 0.7718 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
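QQP is a duplicate-question task, so the classifier takes a question pair. A minimal sketch, assuming the usual GLUE convention that the second logit corresponds to "duplicate" (the card does not state the label mapping):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "muhtasham/tiny-mlm-glue-mrpc-target-glue-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # assumed order: [not_duplicate, duplicate]
```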
078e1139e604577f1d605f71d71c8bb6
Devarshi/Brain_Tumor_Classification_using_swin
Devarshi
swin
14
4
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,689
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Brain_Tumor_Classification_using_swin This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0123 - Accuracy: 0.9961 - F1: 0.9961 - Recall: 0.9961 - Precision: 0.9961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.1234 | 1.0 | 180 | 0.0450 | 0.9840 | 0.9840 | 0.9840 | 0.9840 | | 0.0837 | 2.0 | 360 | 0.0198 | 0.9926 | 0.9926 | 0.9926 | 0.9926 | | 0.0373 | 3.0 | 540 | 0.0123 | 0.9961 | 0.9961 | 0.9961 | 0.9961 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
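For inference, the fine-tuned Swin checkpoint should work with the standard image-classification pipeline; the image path is a placeholder, and the class names come from the fine-tuned config:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Devarshi/Brain_Tumor_Classification_using_swin",
)

for pred in classifier("mri_scan.jpg"):  # placeholder path
    print(pred["label"], round(pred["score"], 4))
```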
597a64415e7bbc8e8f1da5dd7012d539
Helsinki-NLP/opus-mt-ln-de
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-ln-de

* source languages: ln
* target languages: de
* OPUS readme: [ln-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.eval.txt)

## Benchmarks

| testset     | BLEU | chr-F |
|:------------|:----:|:-----:|
| JW300.ln.de | 23.3 | 0.428 |
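## Usage sketch

A minimal translation sketch, not from the original card; the sample sentence is a generic Lingala greeting and is purely illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ln-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a batch of Lingala source sentences and generate German output.
batch = tokenizer(["Mbote na yo!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```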
001975a4bd28c75dc6ea738a2d691d1e
roscazo/CTEBMSP_ner_test
roscazo
roberta
14
1
transformers
0
token-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,256
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# CTEBMSP_ner_test

This model is a fine-tuned version of [chizhikchi/Spanish_disease_finder](https://huggingface.co/chizhikchi/Spanish_disease_finder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0560
- Diso Precision: 0.8925
- Diso Recall: 0.8945
- Diso F1: 0.8935
- Diso Number: 2645
- Overall Precision: 0.8925
- Overall Recall: 0.8945
- Overall F1: 0.8935
- Overall Accuracy: 0.9899

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.04          | 1.0   | 1570 | 0.0439          | 0.8410         | 0.8858      | 0.8628  | 2645        | 0.8410            | 0.8858         | 0.8628     | 0.9877           |
| 0.0173        | 2.0   | 3140 | 0.0487          | 0.8728         | 0.8843      | 0.8785  | 2645        | 0.8728            | 0.8843         | 0.8785     | 0.9885           |
| 0.0071        | 3.0   | 4710 | 0.0496          | 0.8911         | 0.8945      | 0.8928  | 2645        | 0.8911            | 0.8945         | 0.8928     | 0.9898           |
| 0.0025        | 4.0   | 6280 | 0.0560          | 0.8925         | 0.8945      | 0.8935  | 2645        | 0.8925            | 0.8945         | 0.8935     | 0.9899           |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
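### Usage sketch

A minimal inference sketch, not from the original card: the entity labels (here DISO, per the metrics above) come from the model config, and the Spanish example sentence is illustrative.

```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation merges word pieces
# into whole entity spans.
ner = pipeline(
    "token-classification",
    model="roscazo/CTEBMSP_ner_test",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta diabetes mellitus tipo 2."))
```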
56a5a88f2c267d0e3c5bb1db9e87aff9
jlondonobo/whisper-large-v2-es
jlondonobo
whisper
22
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,344
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Large V2 Spanish

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 es dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1648
- Wer: 5.0745

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1556        | 0.5   | 750  | 0.1683          | 5.0959 |
| 0.1732        | 1.35  | 1500 | 0.1648          | 5.0745 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
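### Usage sketch

A minimal transcription sketch, not from the original card; the chunking setting and the audio path are illustrative assumptions.

```python
from transformers import pipeline

# ASR pipeline over the fine-tuned Whisper checkpoint; chunk_length_s lets
# the pipeline handle audio longer than Whisper's 30-second window.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlondonobo/whisper-large-v2-es",
    chunk_length_s=30,
)
print(asr("audio_es.mp3")["text"])
```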
f2a506b9a80db684765e3ddd49fa1e9c
WillHeld/t5-base-pointer-top_v2
WillHeld
mt5
17
3
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
['top_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,189
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-pointer-top_v2

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the top_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0256
- Exact Match: 0.8517

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.4545        | 0.82  | 200  | 0.2542          | 0.1294      |
| 0.1878        | 1.65  | 400  | 0.0668          | 0.2128      |
| 0.0796        | 2.47  | 600  | 0.0466          | 0.2276      |
| 0.0536        | 3.29  | 800  | 0.0356          | 0.2309      |
| 0.0424        | 4.12  | 1000 | 0.0317          | 0.2328      |
| 0.0356        | 4.94  | 1200 | 0.0295          | 0.2340      |
| 0.0306        | 5.76  | 1400 | 0.0288          | 0.2357      |
| 0.0277        | 6.58  | 1600 | 0.0271          | 0.2351      |
| 0.0243        | 7.41  | 1800 | 0.0272          | 0.2351      |
| 0.0225        | 8.23  | 2000 | 0.0272          | 0.2353      |
| 0.0206        | 9.05  | 2200 | 0.0267          | 0.2368      |
| 0.0187        | 9.88  | 2400 | 0.0260          | 0.2367      |
| 0.0173        | 10.7  | 2600 | 0.0256          | 0.2383      |
| 0.0161        | 11.52 | 2800 | 0.0260          | 0.2383      |
| 0.0153        | 12.35 | 3000 | 0.0257          | 0.2377      |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
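### Usage sketch

A tentative inference sketch, not from the original card: TOPv2 targets are bracketed intent/slot parses, but the expected input format (plain utterance vs. a task prefix) is an assumption here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "WillHeld/t5-base-pointer-top_v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumed input format: a raw utterance; the model should emit a
# bracketed TOPv2-style semantic parse.
inputs = tokenizer("set an alarm for 7 am tomorrow", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```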
aa5aa6b5874269fa3e0ab64ea7843fa2
lgris/distilxlsr_bp_16-24
lgris
wav2vec2
5
2
transformers
0
feature-extraction
true
false
false
apache-2.0
['pt']
null
null
0
0
0
0
0
0
0
['speech']
false
true
true
2,106
false
# DistilXLSR-53 for BP

[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)

The base model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)

Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee

**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task, so performance may not be as good as in the [original work](https://arxiv.org/abs/2110.01900).

**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster while retaining most performance in ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.

# Usage

See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
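Before fine-tuning, the checkpoint can also be used directly as a frozen speech feature extractor. A minimal sketch under stated assumptions: it uses default Wav2Vec2 preprocessing (16 kHz, normalization) rather than relying on the repo shipping its own preprocessor config, and the input is dummy audio.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Assumption: default Wav2Vec2 feature extraction at 16 kHz.
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)
model = Wav2Vec2Model.from_pretrained("lgris/distilxlsr_bp_16-24")

waveform = torch.randn(16000)  # one second of dummy 16 kHz mono audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```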
b79f6ab2045a389fb3c38a9d42b70133
fathyshalab/domain_transfer_clinic_credit_cards-massive_iot-roberta-large-v1-2-6
fathyshalab
roberta
14
0
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,526
false
# fathyshalab/domain_transfer_clinic_credit_cards-massive_iot-roberta-large-v1-2-6

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_iot-roberta-large-v1-2-6")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
f5bd1d5a02045d3c6eb3e3f5d115c09e
Nadav/bert-base-historic-multilingual-cased-squad-fr
Nadav
bert
10
7
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,307
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-historic-multilingual-cased-squad-fr

This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7001

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9769        | 1.0   | 3660 | 1.8046          |
| 1.6309        | 2.0   | 7320 | 1.7001          |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
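### Usage sketch

A minimal extractive QA sketch, not from the original card; the French question and context below are invented for illustration.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Nadav/bert-base-historic-multilingual-cased-squad-fr",
)
result = qa(
    question="Qui a fondé la ville ?",
    context="La ville fut fondée en 1608 par Samuel de Champlain.",
)
print(result["answer"], result["score"])
```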
66b23cac7169e85ead8f19e974e1006c
jogonba2/barthez-deft-linguistique
jogonba2
mbart
14
4
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
3,620
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# barthez-deft-linguistique

This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.

**Note**: this model is one of the preliminary experiments and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms)

It achieves the following results on the evaluation set:
- Loss: 1.7596
- Rouge1: 41.989
- Rouge2: 22.4524
- Rougel: 32.7966
- Rougelsum: 32.7953
- Gen Len: 22.1549

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.0569        | 1.0   | 108  | 2.0282          | 31.6993 | 14.9483 | 25.5565 | 25.4379   | 18.3803 |
| 2.2892        | 2.0   | 216  | 1.8553          | 35.2563 | 18.019  | 28.3135 | 28.2927   | 18.507  |
| 1.9062        | 3.0   | 324  | 1.7696          | 37.4613 | 18.1488 | 28.9959 | 29.0134   | 19.5352 |
| 1.716         | 4.0   | 432  | 1.7641          | 37.6903 | 18.7496 | 30.1097 | 30.1027   | 18.9577 |
| 1.5722        | 5.0   | 540  | 1.7781          | 38.1013 | 19.8291 | 29.8142 | 29.802    | 19.169  |
| 1.4655        | 6.0   | 648  | 1.7661          | 38.3557 | 20.3309 | 30.5068 | 30.4728   | 19.3662 |
| 1.3507        | 7.0   | 756  | 1.7596          | 39.7409 | 20.2998 | 31.0849 | 31.1152   | 19.3944 |
| 1.2874        | 8.0   | 864  | 1.7706          | 37.7846 | 20.3457 | 30.6826 | 30.6321   | 19.4789 |
| 1.2641        | 9.0   | 972  | 1.7848          | 38.7421 | 19.5701 | 30.5798 | 30.6305   | 19.3944 |
| 1.1192        | 10.0  | 1080 | 1.8008          | 40.3313 | 20.3378 | 31.8325 | 31.8648   | 19.5493 |
| 1.0724        | 11.0  | 1188 | 1.8450          | 38.9612 | 20.5719 | 31.4496 | 31.3144   | 19.8592 |
| 1.0077        | 12.0  | 1296 | 1.8364          | 36.5997 | 18.46   | 29.1808 | 29.1705   | 19.7324 |
| 0.9362        | 13.0  | 1404 | 1.8677          | 38.0371 | 19.2321 | 30.3893 | 30.3926   | 19.6338 |
| 0.8868        | 14.0  | 1512 | 1.9154          | 36.4737 | 18.5314 | 29.325  | 29.3634   | 19.6479 |
| 0.8335        | 15.0  | 1620 | 1.9344          | 35.7583 | 18.0687 | 27.9666 | 27.8675   | 19.8028 |
| 0.8305        | 16.0  | 1728 | 1.9556          | 37.2137 | 18.2199 | 29.5959 | 29.5799   | 19.9577 |
| 0.8057        | 17.0  | 1836 | 1.9793          | 36.6834 | 17.8505 | 28.6701 | 28.7145   | 19.7324 |
| 0.7869        | 18.0  | 1944 | 1.9994          | 37.5918 | 19.1984 | 28.8569 | 28.8278   | 19.7606 |
| 0.7549        | 19.0  | 2052 | 2.0117          | 37.3278 | 18.5169 | 28.778  | 28.7737   | 19.8028 |
| 0.7497        | 20.0  | 2160 | 2.0189          | 37.7513 | 19.1813 | 29.3675 | 29.402    | 19.6901 |

### Framework versions

- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
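### Usage sketch

A minimal summarization sketch, not from the original card; the generation settings and the French input sentence are illustrative assumptions.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="jogonba2/barthez-deft-linguistique",
)

# Illustrative French linguistics abstract.
article = (
    "Cet article présente une étude des marqueurs discursifs dans les "
    "corpus oraux du français contemporain."
)
print(summarizer(article, max_length=64, min_length=8))
```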
47491d529097e7f8cc60f1c65e481b1a
oskarandrsson/mt-hr-sv-finetuned
oskarandrsson
marian
11
17
transformers
1
translation
true
false
false
apache-2.0
['hr', 'sv']
null
null
1
1
0
0
0
0
0
['generated_from_trainer', 'translation']
true
true
true
1,171
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mt-hr-sv-finetuned

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hr-sv](https://huggingface.co/Helsinki-NLP/opus-mt-hr-sv) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9565
- eval_bleu: 49.8248
- eval_runtime: 873.8605
- eval_samples_per_second: 16.982
- eval_steps_per_second: 4.246
- epoch: 5.0
- step: 27825

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
d73d773121398016c35e75943cc6e40c
speechbrain/tts-tacotron2-ljspeech
speechbrain
null
5
2,736
speechbrain
38
text-to-speech
false
false
false
apache-2.0
['en']
['LJSpeech']
null
5
1
4
0
1
0
1
['text-to-speech', 'TTS', 'speech-synthesis', 'Tacotron2', 'speechbrain']
false
true
true
3,726
false
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# Text-to-Speech (TTS) with Tacotron2 trained on LJSpeech

This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [Tacotron2](https://arxiv.org/abs/1712.05884) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

The pre-trained model takes a short text as input and produces a spectrogram as output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram.

## Install SpeechBrain

```bash
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).

### Perform Text-to-Speech (TTS)

```python
import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN

# Initialize TTS (tacotron2) and Vocoder (HiFIGAN)
tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")

# Running the TTS
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")

# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)

# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
```

If you want to generate multiple sentences in one shot, you can do it this way:

```python
from speechbrain.pretrained import Tacotron2

tacotron2 = Tacotron2.from_hparams(source="speechbrain/TTS_Tacotron2", savedir="tmpdir")
items = [
    "A quick brown fox jumped over the lazy dog",
    "How much wood would a woodchuck chuck?",
    "Never odd or even",
]
mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items)
```

### Inference on GPU

To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

### Training

The model was trained with SpeechBrain. To train it from scratch, follow these steps:

1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```

2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```

3. Run Training:
```bash
cd recipes/LJSpeech/TTS/tacotron2/
python train.py --device=cuda:0 --max_grad_norm=1.0 --data_folder=/your_folder/LJSpeech-1.1 hparams/train.yaml
```

You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1PKju-_Nal3DQqd-n0PsaHK-bVIOlbf26?usp=sharing).

### Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

# **About SpeechBrain**

- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/

# **Citing SpeechBrain**

Please cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```
3e3c631a1ed39cdce7bbadae68a8394d
eugenecamus/distilbert-imdb-demo
eugenecamus
distilbert
37
6
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,498
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-imdb-demo

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4328
- Accuracy: 0.928

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3459        | 1.0   | 2657  | 0.2362          | 0.9091   |
| 0.1612        | 2.0   | 5314  | 0.2668          | 0.9248   |
| 0.0186        | 3.0   | 7971  | 0.3274          | 0.9323   |
| 0.1005        | 4.0   | 10628 | 0.3978          | 0.9277   |
| 0.0006        | 5.0   | 13285 | 0.4328          | 0.928    |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
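### Usage sketch

A minimal sentiment-classification sketch, not from the original card; the mapping of label names to positive/negative depends on the saved config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="eugenecamus/distilbert-imdb-demo")
# Label names (e.g. LABEL_0/LABEL_1) come from this checkpoint's config.
print(clf("A beautifully shot film with a script that goes nowhere."))
```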
a02871b8c516c9c5e2f08c79bd213be0
darkvibes/vibes-2-checkpoint-1
darkvibes
null
18
6
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
625
false
### VIBES-2,-Checkpoint-1 Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:
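For local `diffusers` inference, a rough sketch might look like the following; note that the card does not document the instance prompt, so the prompt token below is only a guess.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "darkvibes/vibes-2-checkpoint-1", torch_dtype=torch.float16
).to("cuda")

# The trained instance token is undocumented; "vibes-2" is a placeholder guess.
image = pipe("a photo in vibes-2 style").images[0]
image.save("sample.png")
```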
c7b585af7d8cd8bf51c19920090d60bd
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-xas3-ntsema-colab
ntsema
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['audiofolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,516
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-53-espeak-cv-ft-xas3-ntsema-colab

This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3037
- Wer: 0.9713

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.2912        | 9.09  | 400  | 3.9091          | 1.0    |
| 2.5952        | 18.18 | 800  | 3.8703          | 0.9959 |
| 2.3509        | 27.27 | 1200 | 4.3037          | 0.9713 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
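### Usage sketch

A minimal inference sketch, not from the original card; since the base checkpoint is a phoneme-level recognizer, the output may be a phonetic rather than orthographic transcription, and the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ntsema/wav2vec2-xlsr-53-espeak-cv-ft-xas3-ntsema-colab",
)
# Expect 16 kHz speech input, as with other XLSR-53 derivatives.
print(asr("sample.wav")["text"])
```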
b9d13ac0b6985f4f7be8f9c777a3dc16
Helsinki-NLP/opus-mt-bg-it
Helsinki-NLP
marian
11
29
transformers
0
translation
true
true
false
apache-2.0
['bg', 'it']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
1,990
false
### bul-ita

* source group: Bulgarian
* target group: Italian
* OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md)
* model: transformer
* source language(s): bul
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|:---------------------|:----:|:-----:|
| Tatoeba-test.bul.ita | 43.1 | 0.653 |

### System Info:

- hf_name: bul-ita
- source_languages: bul
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'it']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: ita
- short_pair: bg-it
- chrF2_score: 0.653
- bleu: 43.1
- brevity_penalty: 0.987
- ref_len: 16951.0
- src_name: Bulgarian
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: it
- prefer_old: False
- long_pair: bul-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
14905d68f94ffc39354c000f3b358901
Maniac/wav2vec2-xls-r-60-urdu
Maniac
wav2vec2
19
9
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
true
true
true
1,536
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-60-urdu

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8433
- Wer: 0.9852

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.468         | 166.67 | 500  | 3.0262          | 1.0035 |
| 0.0572        | 333.33 | 1000 | 3.5352          | 0.9721 |
| 0.0209        | 500.0  | 1500 | 3.7266          | 0.9834 |
| 0.0092        | 666.67 | 2000 | 3.8433          | 0.9852 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
6b0800a30b936fb00ba5e277ddb991b3
deepset/gelectra-base-germanquad-distilled
deepset
electra
8
353
transformers
1
question-answering
true
false
false
mit
['de']
['deepset/germanquad']
null
0
0
0
0
0
0
0
['exbert']
false
true
true
3,035
false
![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg)

## Overview
**Language model:** gelectra-base-germanquad-distilled
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021

## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German-language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions with 2204 · 3 − 76 = 6536 answers, because we removed 76 wrong answers.
- In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.

See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.

## Hyperparameters
```
batch_size = 24
n_epochs = 6
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 2
distillation_loss_weight = 0.75
```

## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name. For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset. The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD. The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
```
"exact": 62.4773139745916
"f1": 80.9488017070188
```

![performancetable](https://lh3.google.com/u/0/d/1IFqkq8OZ7TFnGzxmW6eoxXSYa12f2M7O=w1970-h1546-iv1)

## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`

## About us
![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

By the way: [we're hiring!](http://www.deepset.ai/jobs)
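## Usage sketch

A minimal extractive QA sketch, not from the original card; the German question and context are invented for illustration.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/gelectra-base-germanquad-distilled",
)
print(qa(
    question="Wo wohnt der Bürgermeister?",
    context="Der Bürgermeister wohnt seit 2019 in einem kleinen Haus am Stadtrand von Ulm.",
))
```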
a03b5c805aad953e696a62ee8f3ad1bd
shishirAI/wav2vec2-xlsr-nepalii
shishirAI
wav2vec2
12
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,045
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-nepalii

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1
dc669d02b947137b0ff1a353aa453135
jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-10_sixties-0_s380
jonatasgrosman
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
476
false
# exp_w2v2r_de_xls-r_age_teens-10_sixties-0_s380

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
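Following the usage pattern of other HuggingSound-trained checkpoints, inference presumably looks like the sketch below; the audio file paths are placeholders.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-10_sixties-0_s380"
)
# Placeholder paths to 16 kHz German speech recordings.
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])
print(transcriptions)
```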
6963a30ccbe0e8bea25668265236ea56
globuslabs/ScholarBERT_100_WB
globuslabs
bert
8
2
transformers
0
fill-mask
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['science', 'multi-displinary']
false
true
true
2,021
false
# ScholarBERT_100_WB Model

This is the **ScholarBERT_100_WB** variant of the ScholarBERT model family.

The model is pretrained on a large collection of scientific research articles (**221B tokens**).

Additionally, the pretraining data also includes the Wikipedia+BookCorpus, which are used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models.

This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.

The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.

# Model Architecture

| Hyperparameter   | Value |
|:-----------------|:-----:|
| Layers           | 24    |
| Hidden Size      | 1024  |
| Attention Heads  | 16    |
| Total Parameters | 340M  |

# Training Dataset

The vocab and the model are pretrained on **100% of the PRD** scientific literature dataset and the Wikipedia+BookCorpus.

The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below.

![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus_pie_chart.png)

# BibTeX entry and citation info

If using this model, please cite this paper:

```
@misc{hong2022scholarbert,
  doi = {10.48550/ARXIV.2205.11342},
  url = {https://arxiv.org/abs/2205.11342},
  author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
  title = {ScholarBERT: Bigger is Not Always Better},
  publisher = {arXiv},
  year = {2022}
}
```
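# Usage sketch

A minimal masked-language-modeling sketch, not from the original card; the scientific example sentence is illustrative.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="globuslabs/ScholarBERT_100_WB")
# BERT-style models use the [MASK] token.
print(fill("The enzyme catalyzes the [MASK] of the substrate."))
```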
ff36490babdd99dfa924ba5f2916d34f
Helsinki-NLP/opus-mt-ja-en
Helsinki-NLP
marian
10
30,287
transformers
14
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
1
0
1
['translation']
false
true
true
770
false
### opus-mt-ja-en

* source languages: ja
* target languages: en
* OPUS readme: [ja-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset       | BLEU | chr-F |
|:--------------|:----:|:-----:|
| Tatoeba.ja.en | 41.7 | 0.589 |
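## Usage sketch

A minimal translation sketch, not from the original card; the Japanese sample sentence ("I like cats.") is illustrative.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
print(translator("猫が好きです。"))
```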
83d454bdde700be4474592888faf4f22
sentence-transformers/msmarco-distilbert-base-tas-b
sentence-transformers
distilbert
13
37,022
sentence-transformers
5
sentence-similarity
true
true
false
apache-2.0
['en']
['ms_marco']
null
1
0
1
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,802
false
# sentence-transformers/msmarco-distilbert-base-tas-b

This is a port of the [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco) to [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and is optimized for the task of semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# CLS Pooling - Take output from first token
def cls_pooling(model_output):
    return model_output.last_hidden_state[:, 0]

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = cls_pooling(model_output)

    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-tas-b)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

Have a look at: [DistilBert TAS-B Model](https://huggingface.co/sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco)
24a955564d6ec89152de9c30c629b554
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm1000
dminiotas05
distilbert
14
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,723
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-ft1500_norm1000

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0875
- Mse: 1.3594
- Mae: 0.5794
- R2: 0.3573
- Accuracy: 0.7015

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Mse    | Mae    | R2     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.8897        | 1.0   | 3122  | 1.0463          | 1.3078 | 0.5936 | 0.3817 | 0.7008   |
| 0.7312        | 2.0   | 6244  | 1.0870          | 1.3588 | 0.5796 | 0.3576 | 0.7002   |
| 0.5348        | 3.0   | 9366  | 1.1056          | 1.3820 | 0.5786 | 0.3467 | 0.7124   |
| 0.3693        | 4.0   | 12488 | 1.0866          | 1.3582 | 0.5854 | 0.3579 | 0.7053   |
| 0.2848        | 5.0   | 15610 | 1.0875          | 1.3594 | 0.5794 | 0.3573 | 0.7015   |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
a984d17064552475f532b31dfd7f282c