| Column | Type | Range / cardinality |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | - |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
| tokens_length | sequencelengths | 1 to 723 |
| input_texts | sequencelengths | 1 to 1 |
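The rows below follow the column order above. As a quick way to poke at a dataset with this schema, here is a minimal `datasets` sketch; the repository id is a placeholder, since the dump does not name the dataset:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the real dataset name.
ds = load_dataset("username/model-cards-dump", split="train")

print(ds.column_names)                  # pipeline_tag, library_name, text, metadata, ...
print(ds[0]["pipeline_tag"], ds[0]["id"])  # e.g. a task tag and a model repo id
```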
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-epochs15 This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs15", "results": []}]}
AKulk/wav2vec2-base-timit-epochs15
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-base-timit-epochs15 This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
[ "# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ 47, 50, 7, 9, 9, 4, 133, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-epochs5 This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs5", "results": []}]}
AKulk/wav2vec2-base-timit-epochs5
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
# wav2vec2-base-timit-epochs5 This model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
[ "# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
[ 47, 52, 7, 9, 9, 4, 133, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 5\n- total_train_batch_size: 80\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 5\n- mixed_precision_training: Native AMP### Training results### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3" ]
summarization
transformers
# summarization_fanpage128 This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on Fanpage dataset for Abstractive Summarization. It achieves the following results: - Loss: 1.5348 - Rouge1: 34.1882 - Rouge2: 15.7866 - Rougel: 25.141 - Rougelsum: 28.4882 - Gen Len: 69.3041 ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-fanpage-128") model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-fanpage-128") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3 # Citation More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_fanpage128", "results": []}]}
ARTeLab/it5-summarization-fanpage
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "it", "dataset:ARTeLab/fanpage", "base_model:gsarti/it5-base", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# summarization_fanpage128 This model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization. It achieves the following results: - Loss: 1.5348 - Rouge1: 34.1882 - Rouge2: 15.7866 - Rougel: 25.141 - Rougelsum: 28.4882 - Gen Len: 69.3041 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3 More details and results in published work
[ "# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.5348\n- Rouge1: 34.1882\n- Rouge2: 15.7866\n- Rougel: 25.141\n- Rougelsum: 28.4882\n- Gen Len: 69.3041", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.5348\n- Rouge1: 34.1882\n- Rouge2: 15.7866\n- Rougel: 25.141\n- Rougelsum: 28.4882\n- Gen Len: 69.3041", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ 73, 92, 3, 95, 54 ]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.5348\n- Rouge1: 34.1882\n- Rouge2: 15.7866\n- Rougel: 25.141\n- Rougelsum: 28.4882\n- Gen Len: 69.3041## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
summarization
transformers
# summarization_ilpost This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on IlPost dataset for Abstractive Summarization. It achieves the following results: - Loss: 1.6020 - Rouge1: 33.7802 - Rouge2: 16.2953 - Rougel: 27.4797 - Rougelsum: 30.2273 - Gen Len: 45.3175 ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-ilpost") model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-ilpost") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_ilpost", "results": []}]}
ARTeLab/it5-summarization-ilpost
null
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "summarization", "it", "dataset:ARTeLab/ilpost", "base_model:gsarti/it5-base", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# summarization_ilpost This model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization. It achieves the following results: - Loss: 1.6020 - Rouge1: 33.7802 - Rouge2: 16.2953 - Rougel: 27.4797 - Rougelsum: 30.2273 - Gen Len: 45.3175 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
[ "# summarization_ilpost\n\nThis model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.6020\n- Rouge1: 33.7802\n- Rouge2: 16.2953\n- Rougel: 27.4797\n- Rougelsum: 30.2273\n- Gen Len: 45.3175", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# summarization_ilpost\n\nThis model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.6020\n- Rouge1: 33.7802\n- Rouge2: 16.2953\n- Rougel: 27.4797\n- Rougelsum: 30.2273\n- Gen Len: 45.3175", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3" ]
[ 76, 92, 3, 95, 47 ]
[ "TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_ilpost\n\nThis model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.6020\n- Rouge1: 33.7802\n- Rouge2: 16.2953\n- Rougel: 27.4797\n- Rougelsum: 30.2273\n- Gen Len: 45.3175## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3" ]
summarization
transformers
# summarization_mlsum This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on MLSum-it for Abstractive Summarization. It achieves the following results: - Loss: 2.0190 - Rouge1: 19.3739 - Rouge2: 5.9753 - Rougel: 16.691 - Rougelsum: 16.7862 - Gen Len: 32.5268 ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-mlsum") model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-mlsum") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3 # Citation More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_mlsum", "results": []}]}
ARTeLab/it5-summarization-mlsum
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "summarization", "it", "dataset:ARTeLab/mlsum-it", "base_model:gsarti/it5-base", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# summarization_mlsum This model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization. It achieves the following results: - Loss: 2.0190 - Rouge1: 19.3739 - Rouge2: 5.9753 - Rougel: 16.691 - Rougelsum: 16.7862 - Gen Len: 32.5268 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3 More details and results in published work
[ "# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.0190\n- Rouge1: 19.3739\n- Rouge2: 5.9753\n- Rougel: 16.691\n- Rougelsum: 16.7862\n- Gen Len: 32.5268", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.0190\n- Rouge1: 19.3739\n- Rouge2: 5.9753\n- Rougel: 16.691\n- Rougelsum: 16.7862\n- Gen Len: 32.5268", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ 75, 93, 3, 95, 54 ]
[ "TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.0190\n- Rouge1: 19.3739\n- Rouge2: 5.9753\n- Rougel: 16.691\n- Rougelsum: 16.7862\n- Gen Len: 32.5268## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 6\n- eval_batch_size: 6\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n\n- Transformers 4.12.0.dev0\n- Pytorch 1.9.1+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
summarization
transformers
# mbart-summarization-fanpage This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on Fanpage dataset for Abstractive Summarization. It achieves the following results: - Loss: 2.1833 - Rouge1: 36.5027 - Rouge2: 17.4428 - Rougel: 26.1734 - Rougelsum: 30.2636 - Gen Len: 75.2413 ## Usage ```python from transformers import MBartTokenizer, MBartForConditionalGeneration tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-fanpage") model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-fanpage") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 # Citation More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_fanpage4epoch", "results": []}]}
ARTeLab/mbart-summarization-fanpage
null
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "summarization", "it", "dataset:ARTeLab/fanpage", "base_model:facebook/mbart-large-cc25", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
# mbart-summarization-fanpage This model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization. It achieves the following results: - Loss: 2.1833 - Rouge1: 36.5027 - Rouge2: 17.4428 - Rougel: 26.1734 - Rougelsum: 30.2636 - Gen Len: 75.2413 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 More details and results in published work
[ "# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.1833\n- Rouge1: 36.5027\n- Rouge2: 17.4428\n- Rougel: 26.1734\n- Rougelsum: 30.2636\n- Gen Len: 75.2413", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.1833\n- Rouge1: 36.5027\n- Rouge2: 17.4428\n- Rougel: 26.1734\n- Rougelsum: 30.2636\n- Gen Len: 75.2413", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ 68, 93, 3, 95, 54 ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.1833\n- Rouge1: 36.5027\n- Rouge2: 17.4428\n- Rougel: 26.1734\n- Rougelsum: 30.2636\n- Gen Len: 75.2413## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
summarization
transformers
# mbart_summarization_ilpost This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on IlPost dataset for Abstractive Summarization. It achieves the following results: - Loss: 2.3640 - Rouge1: 38.9101 - Rouge2: 21.384 - Rougel: 32.0517 - Rougelsum: 35.0743 - Gen Len: 39.8843 ## Usage ```python from transformers import MBartTokenizer, MBartForConditionalGeneration tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-ilpost") model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-ilpost") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 # Citation More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_ilpost", "results": []}]}
ARTeLab/mbart-summarization-ilpost
null
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "summarization", "it", "dataset:ARTeLab/ilpost", "base_model:facebook/mbart-large-cc25", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
# mbart_summarization_ilpost This model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization. It achieves the following results: - Loss: 2.3640 - Rouge1: 38.9101 - Rouge2: 21.384 - Rougel: 32.0517 - Rougelsum: 35.0743 - Gen Len: 39.8843 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 More details and results in published work
[ "# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.3640\n- Rouge1: 38.9101\n- Rouge2: 21.384\n- Rougel: 32.0517\n- Rougelsum: 35.0743\n- Gen Len: 39.8843", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.3640\n- Rouge1: 38.9101\n- Rouge2: 21.384\n- Rougel: 32.0517\n- Rougelsum: 35.0743\n- Gen Len: 39.8843", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ 68, 95, 3, 95, 54 ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.3640\n- Rouge1: 38.9101\n- Rouge2: 21.384\n- Rougel: 32.0517\n- Rougelsum: 35.0743\n- Gen Len: 39.8843## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
summarization
transformers
# mbart_summarization_mlsum This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on mlsum-it for Abstractive Summarization. It achieves the following results: - Loss: 3.3336 - Rouge1: 19.3489 - Rouge2: 6.4028 - Rougel: 16.3497 - Rougelsum: 16.5387 - Gen Len: 33.5945 ## Usage ```python from transformers import MBartTokenizer, MBartForConditionalGeneration tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum") model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 # Citation More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228) ``` @Article{info13050228, AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo}, TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization}, JOURNAL = {Information}, VOLUME = {13}, YEAR = {2022}, NUMBER = {5}, ARTICLE-NUMBER = {228}, URL = {https://www.mdpi.com/2078-2489/13/5/228}, ISSN = {2078-2489}, ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.}, DOI = {10.3390/info13050228} } ```
{"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_mlsum", "results": []}]}
ARTeLab/mbart-summarization-mlsum
null
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "summarization", "it", "dataset:ARTeLab/mlsum-it", "base_model:facebook/mbart-large-cc25", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
# mbart_summarization_mlsum This model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization. It achieves the following results: - Loss: 3.3336 - Rouge1: 19.3489 - Rouge2: 6.4028 - Rougel: 16.3497 - Rougelsum: 16.5387 - Gen Len: 33.5945 ## Usage ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3 More details and results in published work
[ "# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 3.3336\n- Rouge1: 19.3489\n- Rouge2: 6.4028\n- Rougel: 16.3497\n- Rougelsum: 16.5387\n- Gen Len: 33.5945", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 3.3336\n- Rouge1: 19.3489\n- Rouge2: 6.4028\n- Rougel: 16.3497\n- Rougelsum: 16.5387\n- Gen Len: 33.5945", "## Usage", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0", "### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
[ 70, 98, 3, 95, 54 ]
[ "TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 3.3336\n- Rouge1: 19.3489\n- Rouge2: 6.4028\n- Rougel: 16.3497\n- Rougelsum: 16.5387\n- Gen Len: 33.5945## Usage### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4.0### Framework versions\n\n- Transformers 4.15.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.15.1\n- Tokenizers 0.10.3\n\nMore details and results in published work" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PENGMENGJIE-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unkown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "PENGMENGJIE-finetuned-emotion", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]}
ASCCCCCCCC/PENGMENGJIE-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# PENGMENGJIE-finetuned-emotion This model is a fine-tuned version of distilbert-base-uncased on an unkown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
[ "# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ 47, 37, 7, 9, 9, 4, 93, 42 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-finetuned-amazon_zh_20000 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1683 - Accuracy: 0.5224 - F1: 0.5194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.2051 | 1.0 | 2500 | 1.1717 | 0.506 | 0.4847 | | 1.0035 | 2.0 | 5000 | 1.1683 | 0.5224 | 0.5194 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-chinese-finetuned-amazon_zh_20000", "results": []}]}
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-base-chinese-finetuned-amazon\_zh\_20000 ============================================= This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1683 * Accuracy: 0.5224 * F1: 0.5194 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ 37, 101, 5, 40 ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-chinese-amazon_zh_20000 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1518 - Accuracy: 0.5092 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.196 | 1.0 | 1250 | 1.1518 | 0.5092 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-chinese-amazon_zh_20000", "results": []}]}
ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-chinese-amazon\_zh\_20000 ========================================= This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1518 * Accuracy: 0.5092 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ 37, 101, 5, 40 ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-amazon_zh_20000 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3031 - Accuracy: 0.4406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.396 | 1.0 | 1250 | 1.3031 | 0.4406 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-multilingual-cased-amazon_zh_20000", "results": []}]}
ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-multilingual-cased-amazon\_zh\_20000 ==================================================== This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3031 * Accuracy: 0.4406 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ 47, 101, 5, 40 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-amazon_zh_20000 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3516 - Accuracy: 0.414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4343 | 1.0 | 1250 | 1.3516 | 0.414 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-amazon_zh_20000", "results": []}]}
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-amazon\_zh\_20000 =================================================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.3516 * Accuracy: 0.414 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.9.1 * Datasets 1.18.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
[ 47, 101, 5, 40 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.1\n* Datasets 1.18.3\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]}
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3

[ "# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1", "### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ 47, 42, 7, 9, 9, 4, 93, 42 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 48\n- eval_batch_size: 48\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1### Framework versions\n\n- Transformers 4.9.0\n- Pytorch 1.7.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]}
AT/distilroberta-base-finetuned-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of distilroberta-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
[ "# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 80.0", "### Training results", "### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 80.0", "### Training results", "### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
[ 45, 40, 7, 9, 9, 4, 95, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 80.0### Training results### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
ATGdev/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]}
AVSilva/bertimbau-large-fine-tuned-md
null
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
# result This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7458", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7458", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 38, 46, 7, 9, 9, 4, 95, 5, 47 ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7458## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7570 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]}
AVSilva/bertimbau-large-fine-tuned-sd
null
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
# result This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7570 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7570", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7570", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ 38, 46, 7, 9, 9, 4, 95, 5, 47 ]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7570## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0### Training results### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
text-generation
transformers
# Tony Stark DialoGPT model
{"tags": ["conversational"]}
AVeryRealHuman/DialoGPT-small-TonyStark
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Tony Stark DialoGPT model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
[ 43 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_znj9o4r This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp_znj9o4r", "results": []}]}
AWTStress/stress_classifier
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
# tmp_znj9o4r This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n", "# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ 38, 34, 7, 9, 9, 4, 32, 5, 38 ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32### Training results### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # stress_score This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "stress_score", "results": []}]}
AWTStress/stress_score
null
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
# stress_score This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n", "# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
[ 38, 27, 7, 9, 9, 4, 32, 5, 38 ]
[ "TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32### Training results### Framework versions\n\n- Transformers 4.16.2\n- TensorFlow 2.8.0\n- Datasets 1.18.3\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4812 - Wer: 0.3557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4668 | 4.0 | 500 | 1.3753 | 0.9895 | | 0.6126 | 8.0 | 1000 | 0.4809 | 0.4350 | | 0.2281 | 12.0 | 1500 | 0.4407 | 0.4033 | | 0.1355 | 16.0 | 2000 | 0.4590 | 0.3765 | | 0.0923 | 20.0 | 2500 | 0.4754 | 0.3707 | | 0.0654 | 24.0 | 3000 | 0.4719 | 0.3557 | | 0.0489 | 28.0 | 3500 | 0.4812 | 0.3557 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Pinwheel/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.4812 * Wer: 0.3557 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ 47, 128, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
image-classification
null
# FashionMNIST PyTorch Quick Start
{"tags": ["image-classification", "pytorch", "huggingpics", "some_thing"], "metrics": ["accuracy"], "private": false}
Ab0/foo-model
null
[ "pytorch", "image-classification", "huggingpics", "some_thing", "model-index", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #pytorch #image-classification #huggingpics #some_thing #model-index #region-us
# FashionMNIST PyTorch Quick Start
[]
[ "TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n" ]
[ 26 ]
[ "TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n" ]
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co/bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co/bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co/Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co/Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co/Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co/Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-base-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis ============================================================= These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 50 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co/bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co/bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co/Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co/Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co/Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co/Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-large-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis ============================================================= These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 50 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co/bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co/bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co/Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co/Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co/Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co/Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-medium-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis ============================================================= These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 50 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comments) dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: | Model Version | No. of Parameters | Training Time | F1-Score | Accuracy | | ------------------- | ----------------- | -------------- | -------- | -------- | | LSTM | ~4 M | 3 min | 0.7399 | 0.7445 | | Bi-LSTM | ~4.3 M | 6 min 35 s | 0.7380 | 0.7437 | | [BERT Base](https://huggingface.co/bert-base-uncased) | ~109.5 M | 33 min 20 s | 0.6979 | 0.7500 | | [BERT Large](https://huggingface.co/bert-large-uncased) | ~335.1 M | 1 h 50 min | 0.6976 | 0.7484 | | [BERT Arabic Mini](https://huggingface.co/Abdou/arabert-mini-algerian) | ~11.6 M | 2 min 40 s | 0.7057 | 0.7527 | | [BERT Arabic Medium](https://huggingface.co/Abdou/arabert-medium-algerian) | ~42.1 M | 11 min 25 s | 0.7521 | 0.7860 | | [BERT Arabic Base](https://huggingface.co/Abdou/arabert-base-algerian) | ~110.6 M | 34 min 19 s | 0.7688 | 0.8002 | | **[BERT Arabic Large](https://huggingface.co/Abdou/arabert-large-algerian)** | **~336.7 M** | **1 h 53 min** | **0.7838** | **0.8174** | # Citation If you find our work useful, please cite it as follows: ```bibtex @article{2023, title={Sentiment Analysis on Algerian Dialect with Transformers}, author={Zakaria Benmounah and Abdennour Boulesnane and Abdeladim Fadheli and Mustapha Khial}, journal={Applied Sciences}, volume={13}, number={20}, pages={11157}, year={2023}, month={Oct}, publisher={MDPI AG}, DOI={10.3390/app132011157}, ISSN={2076-3417}, url={http://dx.doi.org/10.3390/app132011157} } ```
{"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]}
Abdou/arabert-mini-algerian
null
[ "transformers", "pytorch", "bert", "text-classification", "ar", "dataset:Abdou/dz-sentiment-yt-comments", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis ============================================================= These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 50 ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
null
Model details available [here](https://github.com/awasthiabhijeet/PIE)
{}
AbhijeetA/PIE
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Model details available here
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# HarryPotter DialoGPT Model
{"tags": ["conversational"]}
AbhinavSaiTheGreat/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# HarryPotter DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 39 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
## Pretrained Model: BERT base model (cased)
BERT base model (cased) is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English.
## Pretrained Model Description
BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:

- Masked language modeling (MLM)
- Next sentence prediction (NSP)

## Fine-tuned Model Description: BERT fine-tuned on CoLA
The pretrained model can be fine-tuned on other NLP tasks. Here, the BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark.

By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence as grammatically and semantically acceptable or not acceptable.
## How to use?
###### Directly with a pipeline for a text-classification NLP task
```python
from transformers import pipeline

cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola')
print(cola("Tunisia is a beautiful country"))
# [{'label': 'acceptable', 'score': 0.989352285861969}]
```
###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola')
model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola")

text = "Tunisia is a beautiful country."
encoded_input = tokenizer(text, return_tensors='tf')

# Forward pass: the model returns the raw logits
output = model(encoded_input)

# Postprocessing: softmax over the logits, then take the most probable class
probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis=-1)
class_preds = np.argmax(probas_output, axis=-1)

# Map the predicted class id to its label: acceptable or not acceptable
print(model.config.id2label[int(class_preds)])
# Result: 'acceptable'
```
{}
Abirate/bert_fine_tuned_cola
null
[ "transformers", "tf", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1810.04805" ]
[]
TAGS #transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Pretrained Model: BERT base model (cased)
BERT base model (cased) is a model pretrained on English text using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.
## Pretrained Model Description
BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:

- Masked language modeling (MLM)
- Next sentence prediction (NSP)

## Fine-tuned Model Description: BERT fine-tuned on CoLA
The pretrained model can be fine-tuned on other NLP tasks. Here, the BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark.

By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence as grammatically and semantically acceptable or not acceptable.
## How to use?
###### Directly with a pipeline for a text-classification NLP task
###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)
[ "## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.", "## Pretained Model Description\nBERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:\n\n- Masked language modeling (MLM)\n- Next sentence prediction (NSP)", "## Fine-tuned Model Description: BERT fine-tuned Cola\nThe pretrained model could be fine-tuned on other NLP tasks. The BERT model has been fine-tuned on a cola dataset from the GLUE BENCHAMRK, which is an academic benchmark that aims to measure the performance of ML models. Cola is one of the 11 datasets in this GLUE BENCHMARK. \n\nBy fine-tuning BERT on cola dataset, the model is now able to classify a given setence gramatically and semantically as acceptable or not acceptable", "## How to use ?", "###### Directly with a pipeline for a text-classification NLP task", "###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)" ]
[ "TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.", "## Pretained Model Description\nBERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:\n\n- Masked language modeling (MLM)\n- Next sentence prediction (NSP)", "## Fine-tuned Model Description: BERT fine-tuned Cola\nThe pretrained model could be fine-tuned on other NLP tasks. The BERT model has been fine-tuned on a cola dataset from the GLUE BENCHAMRK, which is an academic benchmark that aims to measure the performance of ML models. Cola is one of the 11 datasets in this GLUE BENCHMARK. \n\nBy fine-tuning BERT on cola dataset, the model is now able to classify a given setence gramatically and semantically as acceptable or not acceptable", "## How to use ?", "###### Directly with a pipeline for a text-classification NLP task", "###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)" ]
[ 40, 69, 87, 112, 6, 18, 22 ]
[ "TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.## Pretained Model Description\nBERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:\n\n- Masked language modeling (MLM)\n- Next sentence prediction (NSP)## Fine-tuned Model Description: BERT fine-tuned Cola\nThe pretrained model could be fine-tuned on other NLP tasks. The BERT model has been fine-tuned on a cola dataset from the GLUE BENCHAMRK, which is an academic benchmark that aims to measure the performance of ML models. Cola is one of the 11 datasets in this GLUE BENCHMARK. \n\nBy fine-tuning BERT on cola dataset, the model is now able to classify a given setence gramatically and semantically as acceptable or not acceptable## How to use ?###### Directly with a pipeline for a text-classification NLP task###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)" ]
text-generation
transformers
# jeff's 100% authorized brain scan
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-jefftastic
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# jeff's 100% authorized brain scan
[ "# jeff's 100% authorized brain scan" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# jeff's 100% authorized brain scan" ]
[ 39, 9 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# jeff's 100% authorized brain scan" ]
text-generation
transformers
# Mozark's Brain Uploaded to Hugging Face
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-mozark
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Mozark's Brain Uploaded to Hugging Face
[ "# Mozark's Brain Uploaded to Hugging Face" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Mozark's Brain Uploaded to Hugging Face" ]
[ 39, 11 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face" ]
text-generation
transformers
# Mozark's Brain Uploaded to Hugging Face but v2
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-mozarkv2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Mozark's Brain Uploaded to Hugging Face but v2
[ "# Mozark's Brain Uploaded to Hugging Face but v2" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Mozark's Brain Uploaded to Hugging Face but v2" ]
[ 39, 14 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face but v2" ]
text-generation
transformers
# Un Filtered brain upload of sinclair
{"tags": ["conversational"]}
AccurateIsaiah/DialoGPT-small-sinclair
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Un Filtered brain upload of sinclair
[ "# Un Filtered brain upload of sinclair" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Un Filtered brain upload of sinclair" ]
[ 39, 8 ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Un Filtered brain upload of sinclair" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2128 - Accuracy: 0.928 - F1: 0.9280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8151 | 1.0 | 250 | 0.3043 | 0.907 | 0.9035 | | 0.24 | 2.0 | 500 | 0.2128 | 0.928 | 0.9280 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
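## Inference example

A minimal, hedged usage sketch with the `transformers` text-classification pipeline and this checkpoint; the emotion label names come from the model's `id2label` config (the `emotion` dataset defines six classes: sadness, joy, love, anger, fear, surprise):

```python
from transformers import pipeline

# Load this fine-tuned emotion classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="ActivationAI/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top predicted emotion label with its score.
print(classifier("I am thrilled with how well the fine-tuning went!"))
```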
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.928, "name": "Accuracy"}, {"type": "f1", "value": 0.9280065074208208, "name": "F1"}]}]}]}
ActivationAI/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2128 * Accuracy: 0.928 * F1: 0.9280 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ 56, 101, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-anli_r3` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
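## Inference sketch

Building on the loading snippet above, a minimal sketch of a single prediction, assuming the loaded classification head returns standard logits and that the premise/hypothesis pair is encoded as one sequence pair; the label order of the ANLI head is not stated in this card and should be read from the loaded head's configuration:

```python
from transformers import AutoTokenizer, AutoModelWithHeads
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf")
model.active_adapters = adapter_name

# ANLI pairs a premise with a hypothesis; both are encoded as one sequence pair.
premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    # Assumption: the classification head returns standard logits.
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(pred_id)  # Map to a label via the loaded head's config (assumed 3-way NLI head).
```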
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["anli"]}
AdapterHub/bert-base-uncased-pf-anli_r3
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:anli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-art` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-art", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["art"]}
AdapterHub/bert-base-uncased-pf-art
null
[ "adapter-transformers", "bert", "en", "dataset:art", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 30, 74, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-boolq", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]}
AdapterHub/bert-base-uncased-pf-boolq
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:qa/boolq", "en", "dataset:boolq", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 48, 80, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cola", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:lingaccept/cola", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-cola
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:lingaccept/cola", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-commonsense_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-commonsense_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]}
AdapterHub/bert-base-uncased-pf-commonsense_qa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/csqa", "en", "dataset:commonsense_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 46, 83, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-comqa", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["com_qa"]}
AdapterHub/bert-base-uncased-pf-comqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:com_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 37, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2000", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]}
AdapterHub/bert-base-uncased-pf-conll2000
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:chunk/conll2000", "en", "dataset:conll2000", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 49, 82, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]}
AdapterHub/bert-base-uncased-pf-conll2003
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ner/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 50, 83, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-conll2003_pos", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/conll2003", "adapter-transformers"], "datasets": ["conll2003"]}
AdapterHub/bert-base-uncased-pf-conll2003_pos
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:pos/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 50, 86, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-copa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/copa", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-copa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/copa", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 36, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cosmos_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]}
AdapterHub/bert-base-uncased-pf-cosmos_qa
null
[ "adapter-transformers", "bert", "adapterhub:comsense/cosmosqa", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 45, 82, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cq", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/cq", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-cq
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/cq", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 40, 79, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-drop` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-drop", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["drop"]}
AdapterHub/bert-base-uncased-pf-drop
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:drop", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 34, 74, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_p", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/bert-base-uncased-pf-duorc_p
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_s", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/bert-base-uncased-pf-duorc_s
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emo"]}
AdapterHub/bert-base-uncased-pf-emo
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:emo", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 75, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emotion", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emotion"]}
AdapterHub/bert-base-uncased-pf-emotion
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:emotion", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 34, 73, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]}
AdapterHub/bert-base-uncased-pf-fce_error_detection
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ged/fce", "en", "dataset:fce_error_detection", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 50, 83, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-hellaswag` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hellaswag", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]}
AdapterHub/bert-base-uncased-pf-hellaswag
null
[ "adapter-transformers", "bert", "adapterhub:comsense/hellaswag", "en", "dataset:hellaswag", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 47, 84, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-hotpotqa` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hotpotqa", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["hotpot_qa"]}
AdapterHub/bert-base-uncased-pf-hotpotqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:hotpot_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 38, 80, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-imdb` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-imdb", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]}
AdapterHub/bert-base-uncased-pf-imdb
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/imdb", "en", "dataset:imdb", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 45, 77, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-mit_movie_trivia
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:ner/mit_movie_trivia", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 44, 87, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]}
AdapterHub/bert-base-uncased-pf-mnli
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/multinli", "en", "dataset:multi_nli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 49, 79, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/mrpc", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-mrpc
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sts/mrpc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 39, 77, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
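The MRPC card in the record above stops once the adapter is activated. As a rough end-to-end illustration, the sketch below scores a sentence pair with the loaded classification head. It assumes the standard `AutoTokenizer` API, that the active head returns a sequence-classification style output with a `logits` field, and that index 1 corresponds to "equivalent" as in the usual MRPC label order; the example sentences are invented.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

# Load the base model, the MRPC adapter and its classification head (as in the card above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mrpc", source="hf")
model.active_adapters = adapter_name
model.eval()

# Score an invented sentence pair for paraphrase equivalence.
sentence_a = "The company posted higher quarterly profits."
sentence_b = "Quarterly profits at the company rose."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Assumes the head output mirrors a standard sequence-classification output with `logits`.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_class = int(probs.argmax(dim=-1))
print(predicted_class, probs.tolist())  # index 1 is usually "equivalent" for MRPC (assumption)
```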
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-multirc", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "bert", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-multirc
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:rc/multirc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 39, 77, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-newsqa` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-newsqa", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["newsqa"]}
AdapterHub/bert-base-uncased-pf-newsqa
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:newsqa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
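The NewsQA card in the record above loads a question-answering head but does not show a prediction. A minimal extractive-QA sketch follows; the question and context are invented, and the head is assumed to expose `start_logits` and `end_logits` like a standard extractive-QA output (the card itself does not state the output format).

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

# Load the base model plus the NewsQA question-answering adapter (as in the card above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-newsqa", source="hf")
model.active_adapters = adapter_name
model.eval()

question = "Who won the race?"
context = "The marathon was held on Sunday and was won by Jane Doe in record time."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Assumes start/end logits as in a standard extractive-QA head; the start <= end check
# is omitted for brevity.
start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```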
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:semtag/pmb", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:semtag/pmb", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 86, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
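For the semantic-tagging adapter in the record above, a per-token prediction sketch is shown below. It assumes the tagging head returns token-level logits of shape (batch, sequence length, number of tags); the mapping from tag indices to PMB tag names ships with the head config and is not listed in the card, so only indices are printed.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

# Load the base model plus the PMB semantic-tagging adapter (as in the card above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assumes per-token logits of shape (batch, seq_len, num_tags).
tag_ids = outputs.logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, tag_id in zip(tokens, tag_ids):
    # Translating tag ids into PMB tag names would require the head's label mapping.
    print(f"{token}\t{tag_id}")
```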
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qnli", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/qnli", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-qnli
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/qnli", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 80, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-qqp` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qqp", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "bert"]}
AdapterHub/bert-base-uncased-pf-qqp
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sts/qqp", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 40, 79, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quail", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quail"]}
AdapterHub/bert-base-uncased-pf-quail
null
[ "adapter-transformers", "bert", "en", "dataset:quail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 31, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
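The QuAIL card in the record above covers loading only. The sketch below scores a set of answer options with the multiple-choice head, assuming it follows the usual BERT multiple-choice convention of one logit per option over inputs shaped (batch, number of choices, sequence length); the passage, question, and options are invented.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

# Load the base model plus the QuAIL multiple-choice adapter (as in the card above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quail", source="hf")
model.active_adapters = adapter_name
model.eval()

context = "Sam missed the bus, so he walked to work and arrived late."
question = "Why was Sam late?"
choices = ["He overslept.", "He missed the bus.", "His car broke down.", "He forgot the date."]

# Encode one (context + question, choice) pair per answer option and add a batch dimension,
# giving tensors of shape (1, num_choices, seq_len) as multiple-choice heads usually expect.
encoded = tokenizer(
    [f"{context} {question}"] * len(choices),
    choices,
    return_tensors="pt",
    padding=True,
)
inputs = {name: tensor.unsqueeze(0) for name, tensor in encoded.items()}

with torch.no_grad():
    outputs = model(**inputs)

# Assumes the head returns one logit per choice, shape (1, num_choices).
print(choices[int(outputs.logits.argmax(dim=-1))])
```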
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quartz", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quartz"]}
AdapterHub/bert-base-uncased-pf-quartz
null
[ "adapter-transformers", "bert", "en", "dataset:quartz", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 30, 74, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-quoref", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["quoref"]}
AdapterHub/bert-base-uncased-pf-quoref
null
[ "adapter-transformers", "bert", "question-answering", "en", "dataset:quoref", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 36, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-race", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```
{"language": ["en"], "tags": ["adapterhub:rc/race", "bert", "adapter-transformers"], "datasets": ["race"]}
AdapterHub/bert-base-uncased-pf-race
null
[ "adapter-transformers", "bert", "adapterhub:rc/race", "en", "dataset:race", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased

An adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the adapter-transformers library.

## Usage

First, install 'adapter-transformers':

_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_

Now, the adapter can be loaded and activated like this:

## Architecture & Training

The training code for this adapter is available at URL
In particular, training configurations for all tasks can be found here.

## Evaluation results

Refer to the paper for more information on results.

If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 39, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-record", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:rc/record", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-record
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:rc/record", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 38, 75, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
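As a small end-to-end illustration of the loading snippet above, the sketch below classifies two made-up review snippets. The example sentences and the assumption that label index 1 is the positive class (matching the rotten_tomatoes label order) are not part of the original card.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter(
    "AdapterHub/bert-base-uncased-pf-rotten_tomatoes", source="hf"
)
model.active_adapters = adapter_name
model.eval()

# Hypothetical review snippets to classify.
reviews = [
    "A warm, funny and quietly moving little film.",
    "Two hours I will never get back.",
]
inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
# Assumed label order: index 0 = negative, index 1 = positive (as in rotten_tomatoes).
for review, pred in zip(reviews, logits.argmax(dim=-1).tolist()):
    print("positive" if pred == 1 else "negative", "->", review)
```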
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]}
AdapterHub/bert-base-uncased-pf-rotten_tomatoes
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/rotten_tomatoes", "en", "dataset:rotten_tomatoes", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 47, 79, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rte", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/rte", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-rte
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/rte", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 39, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scicite", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["scicite"]}
AdapterHub/bert-base-uncased-pf-scicite
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:scicite", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 75, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-scitail", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]}
AdapterHub/bert-base-uncased-pf-scitail
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/scitail", "en", "dataset:scitail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 46, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sick", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
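Because SICK is a sentence-pair task, a short pair-wise inference sketch may help; the premise/hypothesis pair and the index-to-label mapping below are illustrative assumptions rather than something stated in the card.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sick", source="hf")
model.active_adapters = adapter_name
model.eval()

# Hypothetical premise/hypothesis pair.
premise = "A man is playing a guitar on stage."
hypothesis = "A musician is performing."

# Encoding both sentences together gives "[CLS] premise [SEP] hypothesis [SEP]".
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits if hasattr(outputs, "logits") else outputs[0]
# SICK has three NLI labels; this index-to-name mapping is an assumption.
labels = ["entailment", "neutral", "contradiction"]
print(labels[int(logits.argmax(dim=-1))])
```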
{"language": ["en"], "tags": ["text-classification", "adapter-transformers", "bert", "adapterhub:nli/sick"], "datasets": ["sick"]}
AdapterHub/bert-base-uncased-pf-sick
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:nli/sick", "en", "dataset:sick", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 44, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-snli", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["snli"]}
AdapterHub/bert-base-uncased-pf-snli
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:snli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 36, 77, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-social_i_qa` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-social_i_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["social_i_qa"]}
AdapterHub/bert-base-uncased-pf-social_i_qa
null
[ "adapter-transformers", "bert", "en", "dataset:social_i_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 84, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
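To round out the loading snippet above, here is a minimal span-extraction sketch for the question-answering head. The question/context pair is invented, and it assumes the head returns the usual start_logits and end_logits so the answer can be decoded with a greedy argmax.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
model.eval()

# Hypothetical question/context pair.
question = "Which dataset was the adapter trained on?"
context = "This adapter was trained on the SQuAD 1.1 dataset with a bert-base-uncased backbone."

inputs = tokenizer(question, context, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the highest-scoring start and end positions.
# (A production setup would also verify that end >= start.)
start = int(outputs.start_logits.argmax(dim=-1))
end = int(outputs.end_logits.argmax(dim=-1))
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```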
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]}
AdapterHub/bert-base-uncased-pf-squad
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/squad1", "en", "dataset:squad", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 45, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]}
AdapterHub/bert-base-uncased-pf-squad_v2
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/squad2", "en", "dataset:squad_v2", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 48, 81, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/sst-2", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-sst2
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sentiment/sst-2", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 80, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-stsb` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-stsb", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/sts-b", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-stsb
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:sts/sts-b", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 40, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-swag` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-swag", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
    title={What to Pre-Train on? Efficient Intermediate Task Selection},
    author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2104.08247",
    pages = "to appear",
}
```
{"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["swag"]}
AdapterHub/bert-base-uncased-pf-swag
null
[ "adapter-transformers", "bert", "en", "dataset:swag", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 31, 76, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-trec` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-trec", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["trec"]}
AdapterHub/bert-base-uncased-pf-trec
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:trec", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 75, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-ud_deprel` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_deprel", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/bert-base-uncased-pf-ud_deprel
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:deprel/ud_ewt", "en", "dataset:universal_dependencies", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 51, 85, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_en_ewt", source="hf", set_active=True)
```

## Architecture & Training

This adapter was trained using adapter-transformer's example script for dependency parsing.
See https://github.com/Adapter-Hub/adapter-transformers/tree/master/examples/dependency-parsing.

## Evaluation results

Scores achieved by dependency parsing adapters on the test set of UD English EWT after training:

| Model | UAS | LAS |
| --- | --- | --- |
| `bert-base-uncased` | 91.74 | 89.15 |
| `roberta-base` | 91.43 | 88.43 |

## Citation

<!-- Add some description here -->
{"language": ["en"], "tags": ["bert", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/bert-base-uncased-pf-ud_en_ewt
null
[ "adapter-transformers", "bert", "adapterhub:dp/ud_ewt", "en", "dataset:universal_dependencies", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
Adapter 'AdapterHub/bert-base-uncased-pf-ud\_en\_ewt' for bert-base-uncased =========================================================================== An adapter for the 'bert-base-uncased' model that was trained on the dp/ud\_ewt dataset and includes a prediction head for dependency parsing. This adapter was created for usage with the adapter-transformers library. Usage ----- First, install 'adapter-transformers': *Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More* Now, the adapter can be loaded and activated like this: Architecture & Training ----------------------- This adapter was trained using adapter-transformer's example script for dependency parsing. See URL Evaluation results ------------------ Scores achieved by dependency parsing adapters on the test set of UD English EWT after training: Model: 'bert-base-uncased', UAS: 91.74, LAS: 89.15 Model: 'roberta-base', UAS: 91.43, LAS: 88.43
[]
[ "TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n" ]
[ 35 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-ud_pos` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-ud_pos", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/bert-base-uncased-pf-ud_pos
null
[ "adapter-transformers", "bert", "token-classification", "adapterhub:pos/ud_ewt", "en", "dataset:universal_dependencies", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 50, 83, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-wic` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wic", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:wordsence/wic", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-wic
null
[ "adapter-transformers", "bert", "text-classification", "adapterhub:wordsence/wic", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 40, 78, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-wikihop` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wikihop", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
{"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/wikihop", "adapter-transformers"]}
AdapterHub/bert-base-uncased-pf-wikihop
null
[ "adapter-transformers", "bert", "question-answering", "adapterhub:qa/wikihop", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 81, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-winogrande", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
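As a hedged sketch of inference with the card above, the snippet below resolves a Winogrande-style blank by scoring the two candidate completions. The exact input encoding used during training is not documented here, so treating each filled-in sentence as one choice is an assumption, as is the usual `(batch, num_choices)` shape of the multiple-choice head output.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-winogrande", source="hf")
model.active_adapters = adapter_name
model.eval()

# Invented Winogrande-style item: pick the option that fills the blank.
sentence = "The trophy doesn't fit into the suitcase because _ is too small."
options = ["the trophy", "the suitcase"]
candidates = [sentence.replace("_", option) for option in options]

# Multiple-choice heads usually expect input_ids of shape
# (batch_size, num_choices, seq_len); that convention is assumed here.
encoding = tokenizer(candidates, return_tensors="pt", padding=True, truncation=True)
inputs = {key: tensor.unsqueeze(0) for key, tensor in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape: (1, num_choices)
print(options[int(logits.argmax(dim=-1))])
```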
{"language": ["en"], "tags": ["bert", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]}
AdapterHub/bert-base-uncased-pf-winogrande
null
[ "adapter-transformers", "bert", "adapterhub:comsense/winogrande", "en", "dataset:winogrande", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 47, 84, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-wnut_17` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wnut_17", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
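A small inference sketch to go with the card above. The tagging head is assumed to return per-token logits of shape `(batch, seq_len, num_labels)`; only raw label ids are printed, because the id-to-label mapping (WNUT entity types such as person or location) lives in the loaded head's configuration rather than in this snippet.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-wnut_17", source="hf")
model.active_adapters = adapter_name
model.eval()

sentence = "The Empire State Building is lit up in blue tonight"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape: (1, seq_len, num_labels)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
label_ids = logits.argmax(dim=-1)[0].tolist()
for token, label_id in zip(tokens, label_ids):
    print(f"{token}\t{label_id}")
```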
{"language": ["en"], "tags": ["token-classification", "bert", "adapter-transformers"], "datasets": ["wnut_17"]}
AdapterHub/bert-base-uncased-pf-wnut_17
null
[ "adapter-transformers", "bert", "token-classification", "en", "dataset:wnut_17", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 37, 80, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/bert-base-uncased-pf-yelp_polarity` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-yelp_polarity", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
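A short classification sketch for the card above. The review text is made up, and the negative/positive label order is an assumption that should be verified against the head's label map.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-yelp_polarity", source="hf")
model.active_adapters = adapter_name
model.eval()

review = "The food was fantastic and the staff could not have been friendlier."
inputs = tokenizer(review, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # two polarity classes

probs = torch.softmax(logits, dim=-1)[0]
# Label order is assumed; check the prediction head's label map before relying on it.
print({"negative": round(float(probs[0]), 3), "positive": round(float(probs[1]), 3)})
```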
{"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["yelp_polarity"]}
AdapterHub/bert-base-uncased-pf-yelp_polarity
null
[ "adapter-transformers", "bert", "text-classification", "en", "dataset:yelp_polarity", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased An adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 38, 81, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/bioASQyesno` for facebook/bart-base An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/bioasq](https://adapterhub.ml/explore/qa/bioasq/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("facebook/bart-base") adapter_name = model.load_adapter("AdapterHub/bioASQyesno", source="hf", set_active=True) ``` ## Architecture & Training Trained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset. ## Evaluation results Achieved 75% accuracy on the test dataset of bioASQ 8b dataset. ## Citation <!-- Add some description here -->
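The card states the training setup (15 epochs with early stopping, learning rate 1e-4, batch size 4) but not the script. Below is a minimal sketch of how such a run might be reproduced with `AdapterTrainer` from recent adapter-transformers releases. The adapter and head names, the early-stopping patience, and the tiny stand-in dataset are placeholders invented for the example, not the authors' actual setup.

```python
from datasets import Dataset
from transformers import (
    AdapterTrainer,
    AutoModelWithHeads,
    AutoTokenizer,
    EarlyStoppingCallback,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelWithHeads.from_pretrained("facebook/bart-base")
model.add_adapter("bioasq_yesno")                            # hypothetical adapter name
model.add_classification_head("bioasq_yesno", num_labels=2)  # yes / no
model.train_adapter("bioasq_yesno")                          # freeze the base model, train only the adapter

# Tiny stand-in for the bioASQ 8b yes/no split, just to keep the sketch runnable.
raw = Dataset.from_dict({
    "question": ["Is the protein Papilin secreted?", "Is RANKL a cytokine?"],
    "labels": [1, 1],
})

def encode(batch):
    return tokenizer(batch["question"], truncation=True, padding="max_length", max_length=64)

data = raw.map(encode, batched=True)

args = TrainingArguments(
    output_dir="bioasq_yesno",
    learning_rate=1e-4,                # as stated on the card
    per_device_train_batch_size=4,     # as stated on the card
    num_train_epochs=15,               # upper bound; early stopping can end the run sooner
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
trainer = AdapterTrainer(
    model=model,
    args=args,
    train_dataset=data,
    eval_dataset=data,                 # a real dev split would go here
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # patience is a guess
)
trainer.train()
```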
{"tags": ["adapterhub:qa/bioasq", "adapter-transformers", "bart"]}
AdapterHub/bioASQyesno
null
[ "adapter-transformers", "bart", "adapterhub:qa/bioasq", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #adapter-transformers #bart #adapterhub-qa/bioasq #region-us
# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base An adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training Trained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset. ## Evaluation results Achieved 75% accuracy on the test dataset of bioASQ 8b dataset.
[ "# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nTrained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset.", "## Evaluation results\n\nAchieved 75% accuracy on the test dataset of bioASQ 8b dataset." ]
[ "TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n", "# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nTrained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset.", "## Evaluation results\n\nAchieved 75% accuracy on the test dataset of bioASQ 8b dataset." ]
[ 24, 63, 53, 45, 22 ]
[ "TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nTrained for 15 epochs with early stopping, a learning rate of 1e-4, and a batch size of 4 on the yes-no questions of the bioASQ 8b dataset.## Evaluation results\n\nAchieved 75% accuracy on the test dataset of bioASQ 8b dataset." ]
null
adapter-transformers
# Adapter `hSterz/narrativeqa` for facebook/bart-base An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/narrativeqa](https://adapterhub.ml/explore/qa/narrativeqa/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("facebook/bart-base") adapter_name = model.load_adapter("hSterz/narrativeqa", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
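Since the Architecture & Training and Evaluation sections above are left empty, here is only a loosely hedged sketch of generative question answering with this adapter. It assumes the adapter ships a sequence-to-sequence head and that flexible-head generation is supported by the installed adapter-transformers version; the question, summary, and input pairing are invented, as NarrativeQA's exact preprocessing is not described on the card.

```python
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelWithHeads.from_pretrained("facebook/bart-base")
adapter_name = model.load_adapter("hSterz/narrativeqa", source="hf", set_active=True)

# Invented question/summary pair; NarrativeQA pairs questions with story summaries.
question = "Who rescues the prince at the end of the story?"
summary = (
    "A young gardener befriends a lonely prince. When the palace floods during a storm, "
    "she pulls him from the river and they escape together."
)
inputs = tokenizer(question, summary, return_tensors="pt", truncation=True)

# Assumes the bundled head is a seq2seq LM head, so generate() can be used.
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```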
{"tags": ["adapterhub:qa/narrativeqa", "adapter-transformers", "bart"], "datasets": ["narrativeqa"]}
AdapterHub/narrativeqa
null
[ "adapter-transformers", "bart", "adapterhub:qa/narrativeqa", "dataset:narrativeqa", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us
# Adapter 'hSterz/narrativeqa' for facebook/bart-base An adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n", "# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ 29, 58, 53, 5, 4 ]
[ "TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training## Evaluation results" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-anli_r3` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-anli_r3", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
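A brief NLI inference sketch for the card above. The premise/hypothesis pair is invented, ANLI's three-way label set is assumed, and the label order below should be checked against the head's label map before use.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-anli_r3", source="hf")
model.active_adapters = adapter_name
model.eval()

premise = "A man is playing a guitar on stage in front of a large crowd."
hypothesis = "The stage is completely empty."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape: (1, 3)

labels = ["entailment", "neutral", "contradiction"]  # order is an assumption
print(labels[int(logits.argmax(dim=-1))])
```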
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["anli"]}
AdapterHub/roberta-base-pf-anli_r3
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:anli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base An adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 35, 69, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-art` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-art", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
{"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["art"]}
AdapterHub/roberta-base-pf-art
null
[ "adapter-transformers", "roberta", "en", "dataset:art", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base An adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 30, 65, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-boolq` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-boolq", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]}
AdapterHub/roberta-base-pf-boolq
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:qa/boolq", "en", "dataset:boolq", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base An adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 48, 71, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-cola` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cola", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-pre, title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection", author = {Poth, Clifton and Pfeiffer, Jonas and R{"u}ckl{'e}, Andreas and Gurevych, Iryna}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.827", pages = "10585--10605", } ```
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:lingaccept/cola", "adapter-transformers"]}
AdapterHub/roberta-base-pf-cola
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:lingaccept/cola", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base An adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 41, 69, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-commonsense_qa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("roberta-base") adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-commonsense_qa", source="hf") model.active_adapters = adapter_name ``` ## Architecture & Training The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs). ## Evaluation results Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results. ## Citation If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247): ```bibtex @inproceedings{poth-etal-2021-what-to-pre-train-on, title={What to Pre-Train on? Efficient Intermediate Task Selection}, author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych}, booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/2104.08247", pages = "to appear", } ```
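To round off the card above, a multiple-choice inference sketch. The five-candidate item is invented, pairing the question with each answer candidate is an assumption about the training format, and the usual `(batch, num_choices)` head output shape is assumed as well.

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-commonsense_qa", source="hf")
model.active_adapters = adapter_name
model.eval()

# Invented CommonsenseQA-style item: one question, five answer candidates.
question = "Where would you most likely find a stapler?"
choices = ["forest", "office", "ocean", "cave", "orchestra"]

# Each candidate is paired with the question; input_ids of shape
# (batch_size, num_choices, seq_len) are assumed, as for standard MC models.
encoding = tokenizer([question] * len(choices), choices, return_tensors="pt",
                     padding=True, truncation=True)
inputs = {key: tensor.unsqueeze(0) for key, tensor in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # assumed shape: (1, num_choices)
print(choices[int(logits.argmax(dim=-1))])
```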
{"language": ["en"], "tags": ["roberta", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]}
AdapterHub/roberta-base-pf-commonsense_qa
null
[ "adapter-transformers", "roberta", "adapterhub:comsense/csqa", "en", "dataset:commonsense_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base An adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 46, 74, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-comqa", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
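As a rough illustration of using the question-answering head, a sketch that extracts an answer span from a short passage. The question/context pair is invented, and the head is assumed to expose `start_logits`/`end_logits` in the style of transformers' extractive-QA models:

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-comqa", source="hf")
model.active_adapters = adapter_name
model.eval()

# Hypothetical question and context.
question = "When was the Eiffel Tower completed?"
context = "The Eiffel Tower was completed in 1889 as the entrance arch to the World's Fair."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: a production decoder would also enforce end >= start
# and restrict the span to the context portion of the input.
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1], skip_special_tokens=True)
print(answer)
```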
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["com_qa"]}
AdapterHub/roberta-base-pf-comqa
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:com_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base An adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 37, 69, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-conll2000` for roberta-base

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2000", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
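A small sketch of tagging a sentence with the chunking head. The sentence is made up, and mapping the predicted label ids back to the BIO chunk tags (stored with the adapter's prediction head) is left out:

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2000", source="hf")
model.active_adapters = adapter_name
model.eval()

sentence = "The quick brown fox jumps over the lazy dog ."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, num_chunk_labels)

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    # label_id indexes the chunk tag set (B-NP, I-NP, B-VP, ...) kept in the head config.
    print(token, label_id)
```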
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]}
AdapterHub/roberta-base-pf-conll2000
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:chunk/conll2000", "en", "dataset:conll2000", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base An adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 49, 73, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-conll2003` for roberta-base

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoModelWithHeads

model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003", source="hf")
model.active_adapters = adapter_name
```

## Architecture & Training

The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).

## Evaluation results

Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.

## Citation

If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):

```bibtex
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
```
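Because RoBERTa's byte-pair tokenizer splits words into subwords, a tagging sketch that collapses predictions back to one label id per input word via the fast tokenizer's `word_ids()`. The sentence is invented, and the mapping from label ids to entity tags (stored with the head) is omitted:

```python
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-conll2003", source="hf")
model.active_adapters = adapter_name
model.eval()

words = ["Angela", "Merkel", "visited", "Paris", "last", "May", "."]
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, num_entity_labels)
predictions = logits.argmax(dim=-1)[0].tolist()

# Keep the prediction of each word's first subword; skip special tokens.
word_ids = inputs.word_ids(batch_index=0)
seen = set()
for position, word_id in enumerate(word_ids):
    if word_id is None or word_id in seen:
        continue
    seen.add(word_id)
    print(words[word_id], predictions[position])
```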
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]}
AdapterHub/roberta-base-pf-conll2003
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:ner/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base An adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_ Now, the adapter can be loaded and activated like this: ## Architecture & Training The training code for this adapter is available at URL In particular, training configurations for all tasks can be found here. ## Evaluation results Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
[ "# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.", "## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]
[ 50, 74, 53, 30, 39 ]
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:## Architecture & Training\n\nThe training code for this adapter is available at URL\nIn particular, training configurations for all tasks can be found here.## Evaluation results\n\nRefer to the paper for more information on results.\n\nIf you use this adapter, please cite our paper \"What to Pre-Train on? Efficient Intermediate Task Selection\":" ]