Dataset columns:
Model Name: string, lengths 5 to 122
URL: string, lengths 28 to 145
Crawled Text: string, lengths 1 to 199k
text: string, lengths 180 to 199k
AWTStress/stress_classifier
https://huggingface.co/AWTStress/stress_classifier
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AWTStress/stress_classifier ### Model URL : https://huggingface.co/AWTStress/stress_classifier ### Model Description : This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
AWTStress/stress_score
https://huggingface.co/AWTStress/stress_score
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AWTStress/stress_score ### Model URL : https://huggingface.co/AWTStress/stress_score ### Model Description : This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
AZTEC/Arcane
https://huggingface.co/AZTEC/Arcane
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AZTEC/Arcane ### Model URL : https://huggingface.co/AZTEC/Arcane ### Model Description : No model card
Aakansha/hateSpeechClassification
https://huggingface.co/Aakansha/hateSpeechClassification
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Aakansha/hateSpeechClassification ### Model URL : https://huggingface.co/Aakansha/hateSpeechClassification ### Model Description : No model card
Aakansha/hs
https://huggingface.co/Aakansha/hs
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Aakansha/hs ### Model URL : https://huggingface.co/Aakansha/hs ### Model Description : No model card
Aarav/MeanMadCrazy_HarryPotterBot
https://huggingface.co/Aarav/MeanMadCrazy_HarryPotterBot
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Aarav/MeanMadCrazy_HarryPotterBot ### Model URL : https://huggingface.co/Aarav/MeanMadCrazy_HarryPotterBot ### Model Description : No model card
AaravMonkey/modelRepo
https://huggingface.co/AaravMonkey/modelRepo
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AaravMonkey/modelRepo ### Model URL : https://huggingface.co/AaravMonkey/modelRepo ### Model Description : No model card
Aarbor/xlm-roberta-base-finetuned-marc-en
https://huggingface.co/Aarbor/xlm-roberta-base-finetuned-marc-en
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Aarbor/xlm-roberta-base-finetuned-marc-en ### Model URL : https://huggingface.co/Aarbor/xlm-roberta-base-finetuned-marc-en ### Model Description : No model card
Pinwheel/wav2vec2-base-timit-demo-colab
https://huggingface.co/Pinwheel/wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-base-timit-demo-colab ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-base-timit-demo-colab ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
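For reference, a minimal inference sketch for a CTC checkpoint like the one above, using the standard transformers classes; the silent placeholder waveform and the 16 kHz mono assumption are illustrative and not taken from the model card.

```python
# Hedged sketch: greedy CTC transcription with a fine-tuned wav2vec2-base checkpoint.
# The placeholder waveform and the 16 kHz sampling-rate assumption are illustrative only.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Pinwheel/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16_000, dtype=np.float32)  # replace with a real 16 kHz mono waveform
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # shape: (batch, time, vocab)
predicted_ids = torch.argmax(logits, dim=-1)  # greedy decoding over CTC logits
print(processor.batch_decode(predicted_ids))
```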
Pinwheel/wav2vec2-large-xls-r-1b-hi-v2
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hi-v2
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-1b-hi-v2 ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hi-v2 ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-1b-hi
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hi
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-1b-hi ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hi ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-1b-hindi
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hindi
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-1b-hindi ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-1b-hindi ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-300m-50-hi
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-50-hi
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-300m-50-hi ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-50-hi ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-300m-hi-v2
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi-v2
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-300m-hi-v2 ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi-v2 ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-300m-hi-v3
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi-v3
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-300m-hi-v3 ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi-v3 ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-300m-hi
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-300m-hi ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-hi ### Model Description : No model card
Pinwheel/wav2vec2-large-xls-r-300m-tr-colab
https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-tr-colab
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xls-r-300m-tr-colab ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xls-r-300m-tr-colab ### Model Description : No model card
Pinwheel/wav2vec2-large-xlsr-53-hi
https://huggingface.co/Pinwheel/wav2vec2-large-xlsr-53-hi
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Pinwheel/wav2vec2-large-xlsr-53-hi ### Model URL : https://huggingface.co/Pinwheel/wav2vec2-large-xlsr-53-hi ### Model Description : No model card
Ab0/autoencoder-keras-mnist-demo
https://huggingface.co/Ab0/autoencoder-keras-mnist-demo
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab0/autoencoder-keras-mnist-demo ### Model URL : https://huggingface.co/Ab0/autoencoder-keras-mnist-demo ### Model Description : No model card
Ab0/foo-model
https://huggingface.co/Ab0/foo-model
#FashionMNIST PyTorch Quick Start
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab0/foo-model ### Model URL : https://huggingface.co/Ab0/foo-model ### Model Description : #FashionMNIST PyTorch Quick Start
Ab0/keras-dummy-functional-demo
https://huggingface.co/Ab0/keras-dummy-functional-demo
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab0/keras-dummy-functional-demo ### Model URL : https://huggingface.co/Ab0/keras-dummy-functional-demo ### Model Description : No model card
Ab0/keras-dummy-model-mixin-demo
https://huggingface.co/Ab0/keras-dummy-model-mixin-demo
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab0/keras-dummy-model-mixin-demo ### Model URL : https://huggingface.co/Ab0/keras-dummy-model-mixin-demo ### Model Description : No model card
Ab0/keras-dummy-sequential-demo
https://huggingface.co/Ab0/keras-dummy-sequential-demo
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab0/keras-dummy-sequential-demo ### Model URL : https://huggingface.co/Ab0/keras-dummy-sequential-demo ### Model Description : No model card
Ab2021/bookst5
https://huggingface.co/Ab2021/bookst5
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ab2021/bookst5 ### Model URL : https://huggingface.co/Ab2021/bookst5 ### Model Description : No model card
Abab/Test_Albert
https://huggingface.co/Abab/Test_Albert
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abab/Test_Albert ### Model URL : https://huggingface.co/Abab/Test_Albert ### Model Description : No model card
AbdelrahmanZayed/my-awesome-model
https://huggingface.co/AbdelrahmanZayed/my-awesome-model
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbdelrahmanZayed/my-awesome-model ### Model URL : https://huggingface.co/AbdelrahmanZayed/my-awesome-model ### Model Description : No model card
AbderrahimRezki/DialoGPT-small-harry
https://huggingface.co/AbderrahimRezki/DialoGPT-small-harry
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbderrahimRezki/DialoGPT-small-harry ### Model URL : https://huggingface.co/AbderrahimRezki/DialoGPT-small-harry ### Model Description : No model card
AbderrahimRezki/HarryPotterBot
https://huggingface.co/AbderrahimRezki/HarryPotterBot
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbderrahimRezki/HarryPotterBot ### Model URL : https://huggingface.co/AbderrahimRezki/HarryPotterBot ### Model Description : No model card
Abdou/arabert-base-algerian
https://huggingface.co/Abdou/arabert-base-algerian
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abdou/arabert-base-algerian ### Model URL : https://huggingface.co/Abdou/arabert-base-algerian ### Model Description : These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
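As a usage illustration (not part of the crawled card), the Algerian-dialect AraBERT fine-tunes listed here can presumably be queried through the standard text-classification pipeline; the sample comment and the label names it returns are assumptions.

```python
# Hedged sketch: sentiment inference with the Algerian-dialect AraBERT fine-tune.
# The sample comment and the exact label names are assumptions, not from the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="Abdou/arabert-base-algerian")
print(classifier("هذا الفيديو رائع جدا"))  # returns e.g. [{'label': ..., 'score': ...}]
```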
Abdou/arabert-large-algerian
https://huggingface.co/Abdou/arabert-large-algerian
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abdou/arabert-large-algerian ### Model URL : https://huggingface.co/Abdou/arabert-large-algerian ### Model Description : These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Abdou/arabert-medium-algerian
https://huggingface.co/Abdou/arabert-medium-algerian
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abdou/arabert-medium-algerian ### Model URL : https://huggingface.co/Abdou/arabert-medium-algerian ### Model Description : These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Abdou/arabert-mini-algerian
https://huggingface.co/Abdou/arabert-mini-algerian
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abdou/arabert-mini-algerian ### Model URL : https://huggingface.co/Abdou/arabert-mini-algerian ### Model Description : These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube videos in Algerian dialect. The models are evaluated on the testing set: If you find our work useful, please cite it as follows:
Abdullaziz/model1
https://huggingface.co/Abdullaziz/model1
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abdullaziz/model1 ### Model URL : https://huggingface.co/Abdullaziz/model1 ### Model Description : No model card
AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa
https://huggingface.co/AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa ### Model URL : https://huggingface.co/AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa ### Model Description : No model card
AbhijeetA/PIE
https://huggingface.co/AbhijeetA/PIE
Model details available here
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbhijeetA/PIE ### Model URL : https://huggingface.co/AbhijeetA/PIE ### Model Description : Model details available here
Abhilash/BERTBasePyTorch
https://huggingface.co/Abhilash/BERTBasePyTorch
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abhilash/BERTBasePyTorch ### Model URL : https://huggingface.co/Abhilash/BERTBasePyTorch ### Model Description : No model card
AbhinavSaiTheGreat/DialoGPT-small-harrypotter
https://huggingface.co/AbhinavSaiTheGreat/DialoGPT-small-harrypotter
#HarryPotter DialoGPT Model
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbhinavSaiTheGreat/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/AbhinavSaiTheGreat/DialoGPT-small-harrypotter ### Model Description : #HarryPotter DialoGPT Model
Abhishek4/Cuad_Finetune_roberta
https://huggingface.co/Abhishek4/Cuad_Finetune_roberta
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abhishek4/Cuad_Finetune_roberta ### Model URL : https://huggingface.co/Abhishek4/Cuad_Finetune_roberta ### Model Description : No model card
Abi9x/DiabloGPT-large-Axel
https://huggingface.co/Abi9x/DiabloGPT-large-Axel
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abi9x/DiabloGPT-large-Axel ### Model URL : https://huggingface.co/Abi9x/DiabloGPT-large-Axel ### Model Description : No model card
AbidHasan95/movieHunt2
https://huggingface.co/AbidHasan95/movieHunt2
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbidHasan95/movieHunt2 ### Model URL : https://huggingface.co/AbidHasan95/movieHunt2 ### Model Description : No model card
AbidineVall/my-new-shiny-tokenizer
https://huggingface.co/AbidineVall/my-new-shiny-tokenizer
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbidineVall/my-new-shiny-tokenizer ### Model URL : https://huggingface.co/AbidineVall/my-new-shiny-tokenizer ### Model Description : No model card
Abirate/bert_fine_tuned_cola
https://huggingface.co/Abirate/bert_fine_tuned_cola
BERT base model (cased) is a model pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives. The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark. By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence, grammatically and semantically, as acceptable or not acceptable.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abirate/bert_fine_tuned_cola ### Model URL : https://huggingface.co/Abirate/bert_fine_tuned_cola ### Model Description : BERT base model (cased) is a model pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives. The pretrained model can be fine-tuned on other NLP tasks. This BERT model has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in the GLUE benchmark. By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence, grammatically and semantically, as acceptable or not acceptable.
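A minimal sketch of querying such an acceptability classifier through the transformers pipeline, assuming the repository ships weights and a config the pipeline can load; the example sentences and the label names are illustrative.

```python
# Hedged sketch: acceptability classification with the CoLA fine-tuned BERT.
# Label names depend on the uploaded config and are not taken from the card.
from transformers import pipeline

cola = pipeline("text-classification", model="Abirate/bert_fine_tuned_cola")
print(cola("The boy is playing football."))   # expected: acceptable
print(cola("The boys is playing football."))  # expected: not acceptable
```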
Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm
https://huggingface.co/Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm ### Model URL : https://huggingface.co/Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm ### Model Description : No model card
Abirate/code_net_similarity_model_sub23_fbert
https://huggingface.co/Abirate/code_net_similarity_model_sub23_fbert
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abirate/code_net_similarity_model_sub23_fbert ### Model URL : https://huggingface.co/Abirate/code_net_similarity_model_sub23_fbert ### Model Description : No model card
Abirate/gpt_3_finetuned_multi_x_science
https://huggingface.co/Abirate/gpt_3_finetuned_multi_x_science
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI. GPT-Neo (125M) is a transformer model designed using EleutherAI's replication of the GPT-3 architecture; GPT-Neo refers to the class of models, while 125M is the number of parameters of this particular pretrained model, first released in this repository. GPT-Neo (125M), the open-source counterpart of GPT-3, has been fine-tuned on a dataset called Multi-XScience (Multi-XScience_Repository: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles). I first fine-tuned and then deployed it using Google "Material Design" (on Anvil): the Abir Scientific text Generator. By fine-tuning GPT-Neo (the open-source version of GPT-3) on the Multi-XScience dataset, the model is now able to generate scientific texts (even better than GPT-J (6B)). Try the prompt "attention is all" on both my Abir Scientific text Generator and on the GPT-J Eleuther.ai demo to see the difference. A real-time video demonstration is also available.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abirate/gpt_3_finetuned_multi_x_science ### Model URL : https://huggingface.co/Abirate/gpt_3_finetuned_multi_x_science ### Model Description : Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI. GPT-Neo (125M) is a transformer model designed using EleutherAI's replication of the GPT-3 architecture; GPT-Neo refers to the class of models, while 125M is the number of parameters of this particular pretrained model, first released in this repository. GPT-Neo (125M), the open-source counterpart of GPT-3, has been fine-tuned on a dataset called Multi-XScience (Multi-XScience_Repository: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles). I first fine-tuned and then deployed it using Google "Material Design" (on Anvil): the Abir Scientific text Generator. By fine-tuning GPT-Neo (the open-source version of GPT-3) on the Multi-XScience dataset, the model is now able to generate scientific texts (even better than GPT-J (6B)). Try the prompt "attention is all" on both my Abir Scientific text Generator and on the GPT-J Eleuther.ai demo to see the difference. A real-time video demonstration is also available.
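A minimal generation sketch, assuming the checkpoint loads through the standard text-generation pipeline; the sampling parameters are illustrative choices and not taken from the card.

```python
# Hedged sketch: scientific text generation with the fine-tuned GPT-Neo (125M).
# Sampling parameters are illustrative choices, not taken from the model card.
from transformers import pipeline

generator = pipeline("text-generation", model="Abirate/gpt_3_finetuned_multi_x_science")
out = generator("attention is all", max_length=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```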
Abobus/Fu
https://huggingface.co/Abobus/Fu
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abobus/Fu ### Model URL : https://huggingface.co/Abobus/Fu ### Model Description : No model card
Abolior/audiobot
https://huggingface.co/Abolior/audiobot
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abolior/audiobot ### Model URL : https://huggingface.co/Abolior/audiobot ### Model Description : No model card
Abozoroov/Me
https://huggingface.co/Abozoroov/Me
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Abozoroov/Me ### Model URL : https://huggingface.co/Abozoroov/Me ### Model Description : No model card
AbyV/test
https://huggingface.co/AbyV/test
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AbyV/test ### Model URL : https://huggingface.co/AbyV/test ### Model Description : No model card
AccurateIsaiah/DialoGPT-small-jefftastic
https://huggingface.co/AccurateIsaiah/DialoGPT-small-jefftastic
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AccurateIsaiah/DialoGPT-small-jefftastic ### Model URL : https://huggingface.co/AccurateIsaiah/DialoGPT-small-jefftastic ### Model Description :
AccurateIsaiah/DialoGPT-small-mozark
https://huggingface.co/AccurateIsaiah/DialoGPT-small-mozark
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AccurateIsaiah/DialoGPT-small-mozark ### Model URL : https://huggingface.co/AccurateIsaiah/DialoGPT-small-mozark ### Model Description :
AccurateIsaiah/DialoGPT-small-mozarkv2
https://huggingface.co/AccurateIsaiah/DialoGPT-small-mozarkv2
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AccurateIsaiah/DialoGPT-small-mozarkv2 ### Model URL : https://huggingface.co/AccurateIsaiah/DialoGPT-small-mozarkv2 ### Model Description :
AccurateIsaiah/DialoGPT-small-sinclair
https://huggingface.co/AccurateIsaiah/DialoGPT-small-sinclair
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AccurateIsaiah/DialoGPT-small-sinclair ### Model URL : https://huggingface.co/AccurateIsaiah/DialoGPT-small-sinclair ### Model Description :
ActivationAI/distilbert-base-uncased-finetuned-emotion
https://huggingface.co/ActivationAI/distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ActivationAI/distilbert-base-uncased-finetuned-emotion ### Model URL : https://huggingface.co/ActivationAI/distilbert-base-uncased-finetuned-emotion ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
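For illustration, a sketch of emotion classification with this checkpoint via the pipeline API; the example sentence is an assumption, and the returned labels depend on how the classification head was configured on the emotion dataset.

```python
# Hedged sketch: emotion classification with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

emotion = pipeline(
    "text-classification",
    model="ActivationAI/distilbert-base-uncased-finetuned-emotion",
)
print(emotion("I can't believe how happy I am today!"))
```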
AdWeeb/HTI_mbert
https://huggingface.co/AdWeeb/HTI_mbert
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdWeeb/HTI_mbert ### Model URL : https://huggingface.co/AdWeeb/HTI_mbert ### Model Description : No model card
Adalid1985/Adalidarcane
https://huggingface.co/Adalid1985/Adalidarcane
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Adalid1985/Adalidarcane ### Model URL : https://huggingface.co/Adalid1985/Adalidarcane ### Model Description : No model card
AdapterHub/bert-base-uncased-pf-anli_r3
https://huggingface.co/AdapterHub/bert-base-uncased-pf-anli_r3
An adapter for the bert-base-uncased model that was trained on the anli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-anli_r3 ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-anli_r3 ### Model Description : An adapter for the bert-base-uncased model that was trained on the anli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
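Because this and the following AdapterHub entries all describe the same install, load, and activate flow, here is one hedged sketch of that flow for the anli_r3 adapter; it follows the pattern documented for adapter-transformers, but class names vary between releases (newer versions use AutoAdapterModel), so treat it as an assumption rather than the card's verbatim snippet.

```python
# Hedged sketch of the flow the cards describe, after `pip install adapter-transformers`
# (a drop-in fork of transformers with adapter support). AutoModelWithHeads is the
# older API name; newer releases expose AutoAdapterModel instead.
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Load the adapter (with its classification head) from the Hub and activate it.
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-anli_r3", source="hf")
model.active_adapters = adapter_name
```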
AdapterHub/bert-base-uncased-pf-art
https://huggingface.co/AdapterHub/bert-base-uncased-pf-art
An adapter for the bert-base-uncased model that was trained on the art dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-art ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-art ### Model Description : An adapter for the bert-base-uncased model that was trained on the art dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-boolq
https://huggingface.co/AdapterHub/bert-base-uncased-pf-boolq
An adapter for the bert-base-uncased model that was trained on the qa/boolq dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-boolq ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-boolq ### Model Description : An adapter for the bert-base-uncased model that was trained on the qa/boolq dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-cola
https://huggingface.co/AdapterHub/bert-base-uncased-pf-cola
An adapter for the bert-base-uncased model that was trained on the lingaccept/cola dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-cola ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-cola ### Model Description : An adapter for the bert-base-uncased model that was trained on the lingaccept/cola dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-commonsense_qa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-commonsense_qa
An adapter for the bert-base-uncased model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-commonsense_qa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-commonsense_qa ### Model Description : An adapter for the bert-base-uncased model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-comqa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-comqa
An adapter for the bert-base-uncased model that was trained on the com_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-comqa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-comqa ### Model Description : An adapter for the bert-base-uncased model that was trained on the com_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-conll2000
https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2000
An adapter for the bert-base-uncased model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-conll2000 ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2000 ### Model Description : An adapter for the bert-base-uncased model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-conll2003
https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003
An adapter for the bert-base-uncased model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-conll2003 ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003 ### Model Description : An adapter for the bert-base-uncased model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-conll2003_pos
https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003_pos
An adapter for the bert-base-uncased model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-conll2003_pos ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-conll2003_pos ### Model Description : An adapter for the bert-base-uncased model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-copa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-copa
An adapter for the bert-base-uncased model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-copa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-copa ### Model Description : An adapter for the bert-base-uncased model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-cosmos_qa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-cosmos_qa
An adapter for the bert-base-uncased model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-cosmos_qa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-cosmos_qa ### Model Description : An adapter for the bert-base-uncased model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-cq
https://huggingface.co/AdapterHub/bert-base-uncased-pf-cq
An adapter for the bert-base-uncased model that was trained on the qa/cq dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-cq ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-cq ### Model Description : An adapter for the bert-base-uncased model that was trained on the qa/cq dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-drop
https://huggingface.co/AdapterHub/bert-base-uncased-pf-drop
An adapter for the bert-base-uncased model that was trained on the drop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-drop ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-drop ### Model Description : An adapter for the bert-base-uncased model that was trained on the drop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-duorc_p
https://huggingface.co/AdapterHub/bert-base-uncased-pf-duorc_p
An adapter for the bert-base-uncased model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-duorc_p ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-duorc_p ### Model Description : An adapter for the bert-base-uncased model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-duorc_s
https://huggingface.co/AdapterHub/bert-base-uncased-pf-duorc_s
An adapter for the bert-base-uncased model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-duorc_s ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-duorc_s ### Model Description : An adapter for the bert-base-uncased model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-emo
https://huggingface.co/AdapterHub/bert-base-uncased-pf-emo
An adapter for the bert-base-uncased model that was trained on the emo dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-emo ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-emo ### Model Description : An adapter for the bert-base-uncased model that was trained on the emo dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
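For the classification-head adapters such as bert-base-uncased-pf-emo, loading follows the same pattern and the bundled head can be used for inference directly. A hedged sketch under the same API assumptions (the .logits attribute on the head output is assumed, matching the usual transformers classification output):

```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-emo", source="hf")
model.active_adapters = adapter_name

inputs = tokenizer("I can't believe this actually worked!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, num_classes)
predicted_class = int(logits.argmax(dim=-1))  # index into the emo label set
print(predicted_class)
```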
AdapterHub/bert-base-uncased-pf-emotion
https://huggingface.co/AdapterHub/bert-base-uncased-pf-emotion
An adapter for the bert-base-uncased model that was trained on the emotion dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-emotion ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-emotion ### Model Description : An adapter for the bert-base-uncased model that was trained on the emotion dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-fce_error_detection
https://huggingface.co/AdapterHub/bert-base-uncased-pf-fce_error_detection
An adapter for the bert-base-uncased model that was trained on the ged/fce dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-fce_error_detection ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-fce_error_detection ### Model Description : An adapter for the bert-base-uncased model that was trained on the ged/fce dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
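The tagging-head adapters (fce_error_detection here, mit_movie_trivia and pmb_sem_tagging further below) predict one label per token. A sketch under the same assumptions, mapping per-token logits back to the tokenizer's tokens:

```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter(
    "AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf"
)
model.active_adapters = adapter_name

inputs = tokenizer("She go to school every day .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, num_tags)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
tag_ids = logits.argmax(dim=-1)[0].tolist()
for token, tag_id in zip(tokens, tag_ids):
    print(f"{token}\t{tag_id}")
```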
AdapterHub/bert-base-uncased-pf-hellaswag
https://huggingface.co/AdapterHub/bert-base-uncased-pf-hellaswag
An adapter for the bert-base-uncased model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-hellaswag ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-hellaswag ### Model Description : An adapter for the bert-base-uncased model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
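The multiple-choice adapters (hellaswag here, later quail, quartz, race, and social_i_qa) are fetched and activated the same way. This sketch also shows persisting the downloaded adapter locally and reloading it from disk, using the save_adapter/load_adapter calls of the same assumed API; the directory name is illustrative only:

```python
from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-hellaswag", source="hf")
model.active_adapters = adapter_name

# Persist the adapter together with its multiple-choice head.
model.save_adapter("./hellaswag-adapter", adapter_name)

# On a later run, load it from disk instead of the Hub.
fresh_model = AutoAdapterModel.from_pretrained("bert-base-uncased")
local_name = fresh_model.load_adapter("./hellaswag-adapter")
fresh_model.active_adapters = local_name
```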
AdapterHub/bert-base-uncased-pf-hotpotqa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-hotpotqa
An adapter for the bert-base-uncased model that was trained on the hotpot_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-hotpotqa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-hotpotqa ### Model Description : An adapter for the bert-base-uncased model that was trained on the hotpot_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-imdb
https://huggingface.co/AdapterHub/bert-base-uncased-pf-imdb
An adapter for the bert-base-uncased model that was trained on the sentiment/imdb dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-imdb ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-imdb ### Model Description : An adapter for the bert-base-uncased model that was trained on the sentiment/imdb dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-mit_movie_trivia
https://huggingface.co/AdapterHub/bert-base-uncased-pf-mit_movie_trivia
An adapter for the bert-base-uncased model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-mit_movie_trivia ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-mit_movie_trivia ### Model Description : An adapter for the bert-base-uncased model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-mnli
https://huggingface.co/AdapterHub/bert-base-uncased-pf-mnli
An adapter for the bert-base-uncased model that was trained on the nli/multinli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-mnli ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-mnli ### Model Description : An adapter for the bert-base-uncased model that was trained on the nli/multinli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
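For sentence-pair classifiers such as the MNLI adapter, the premise and hypothesis are passed to the tokenizer as a text pair; a sketch under the same assumptions:

```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf")
model.active_adapters = adapter_name

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")  # encoded as a text pair
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # index into the NLI label set
```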
AdapterHub/bert-base-uncased-pf-mrpc
https://huggingface.co/AdapterHub/bert-base-uncased-pf-mrpc
An adapter for the bert-base-uncased model that was trained on the sts/mrpc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-mrpc ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-mrpc ### Model Description : An adapter for the bert-base-uncased model that was trained on the sts/mrpc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-multirc
https://huggingface.co/AdapterHub/bert-base-uncased-pf-multirc
An adapter for the bert-base-uncased model that was trained on the rc/multirc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-multirc ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-multirc ### Model Description : An adapter for the bert-base-uncased model that was trained on the rc/multirc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-newsqa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-newsqa
An adapter for the bert-base-uncased model that was trained on the newsqa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-newsqa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-newsqa ### Model Description : An adapter for the bert-base-uncased model that was trained on the newsqa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
https://huggingface.co/AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
An adapter for the bert-base-uncased model that was trained on the semtag/pmb dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-pmb_sem_tagging ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-pmb_sem_tagging ### Model Description : An adapter for the bert-base-uncased model that was trained on the semtag/pmb dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-qnli
https://huggingface.co/AdapterHub/bert-base-uncased-pf-qnli
An adapter for the bert-base-uncased model that was trained on the nli/qnli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-qnli ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-qnli ### Model Description : An adapter for the bert-base-uncased model that was trained on the nli/qnli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-qqp
https://huggingface.co/AdapterHub/bert-base-uncased-pf-qqp
An adapter for the bert-base-uncased model that was trained on the sts/qqp dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-qqp ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-qqp ### Model Description : An adapter for the bert-base-uncased model that was trained on the sts/qqp dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-quail
https://huggingface.co/AdapterHub/bert-base-uncased-pf-quail
An adapter for the bert-base-uncased model that was trained on the quail dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-quail ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-quail ### Model Description : An adapter for the bert-base-uncased model that was trained on the quail dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-quartz
https://huggingface.co/AdapterHub/bert-base-uncased-pf-quartz
An adapter for the bert-base-uncased model that was trained on the quartz dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-quartz ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-quartz ### Model Description : An adapter for the bert-base-uncased model that was trained on the quartz dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-quoref
https://huggingface.co/AdapterHub/bert-base-uncased-pf-quoref
An adapter for the bert-base-uncased model that was trained on the quoref dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-quoref ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-quoref ### Model Description : An adapter for the bert-base-uncased model that was trained on the quoref dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-race
https://huggingface.co/AdapterHub/bert-base-uncased-pf-race
An adapter for the bert-base-uncased model that was trained on the rc/race dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-race ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-race ### Model Description : An adapter for the bert-base-uncased model that was trained on the rc/race dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-record
https://huggingface.co/AdapterHub/bert-base-uncased-pf-record
An adapter for the bert-base-uncased model that was trained on the rc/record dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-record ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-record ### Model Description : An adapter for the bert-base-uncased model that was trained on the rc/record dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-rotten_tomatoes
https://huggingface.co/AdapterHub/bert-base-uncased-pf-rotten_tomatoes
An adapter for the bert-base-uncased model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-rotten_tomatoes ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-rotten_tomatoes ### Model Description : An adapter for the bert-base-uncased model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-rte
https://huggingface.co/AdapterHub/bert-base-uncased-pf-rte
An adapter for the bert-base-uncased model that was trained on the nli/rte dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-rte ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-rte ### Model Description : An adapter for the bert-base-uncased model that was trained on the nli/rte dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-scicite
https://huggingface.co/AdapterHub/bert-base-uncased-pf-scicite
An adapter for the bert-base-uncased model that was trained on the scicite dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-scicite ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-scicite ### Model Description : An adapter for the bert-base-uncased model that was trained on the scicite dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-scitail
https://huggingface.co/AdapterHub/bert-base-uncased-pf-scitail
An adapter for the bert-base-uncased model that was trained on the nli/scitail dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-scitail ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-scitail ### Model Description : An adapter for the bert-base-uncased model that was trained on the nli/scitail dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-sick
https://huggingface.co/AdapterHub/bert-base-uncased-pf-sick
An adapter for the bert-base-uncased model that was trained on the nli/sick dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-sick ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-sick ### Model Description : An adapter for the bert-base-uncased model that was trained on the nli/sick dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-snli
https://huggingface.co/AdapterHub/bert-base-uncased-pf-snli
An adapter for the bert-base-uncased model that was trained on the snli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-snli ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-snli ### Model Description : An adapter for the bert-base-uncased model that was trained on the snli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-social_i_qa
https://huggingface.co/AdapterHub/bert-base-uncased-pf-social_i_qa
An adapter for the bert-base-uncased model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-social_i_qa ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-social_i_qa ### Model Description : An adapter for the bert-base-uncased model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-squad
https://huggingface.co/AdapterHub/bert-base-uncased-pf-squad
An adapter for the bert-base-uncased model that was trained on the qa/squad1 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-squad ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-squad ### Model Description : An adapter for the bert-base-uncased model that was trained on the qa/squad1 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
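For the extractive question-answering adapters such as the SQuAD one above, the head is assumed to return start and end logits in the usual transformers style; a sketch of end-to-end usage under that assumption:

```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name

question = "Where is the Eiffel Tower located?"
context = ("The Eiffel Tower is a wrought-iron lattice tower "
           "on the Champ de Mars in Paris, France.")
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely answer span from the start/end logits of the QA head.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```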
AdapterHub/bert-base-uncased-pf-squad_v2
https://huggingface.co/AdapterHub/bert-base-uncased-pf-squad_v2
An adapter for the bert-base-uncased model that was trained on the qa/squad2 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-squad_v2 ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-squad_v2 ### Model Description : An adapter for the bert-base-uncased model that was trained on the qa/squad2 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
AdapterHub/bert-base-uncased-pf-sst2
https://huggingface.co/AdapterHub/bert-base-uncased-pf-sst2
An adapter for the bert-base-uncased model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AdapterHub/bert-base-uncased-pf-sst2 ### Model URL : https://huggingface.co/AdapterHub/bert-base-uncased-pf-sst2 ### Model Description : An adapter for the bert-base-uncased model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. First, install adapter-transformers: Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More Now, the adapter can be loaded and activated like this: The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here. Refer to the paper for more information on results. If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":