all-MiniLM-L6-v2 trained on MEDI-MTEB triplets

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2 on 342 MEDI-MTEB triplet datasets, listed in full under Training Datasets below. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • NQ
    • pubmed
    • specter_train_triples
    • S2ORC_citations_abstracts
    • fever
    • gooaq_pairs
    • codesearchnet
    • wikihow
    • WikiAnswers
    • eli5_question_answer
    • amazon-qa
    • medmcqa
    • zeroshot
    • TriviaQA_pairs
    • PAQ_pairs
    • stackexchange_duplicate_questions_title-body_title-body
    • trex
    • flickr30k_captions
    • hotpotqa
    • task671_ambigqa_text_generation
    • task061_ropes_answer_generation
    • task285_imdb_answer_generation
    • task905_hate_speech_offensive_classification
    • task566_circa_classification
    • task184_snli_entailment_to_neutral_text_modification
    • task280_stereoset_classification_stereotype_type
    • task1599_smcalflow_classification
    • task1384_deal_or_no_dialog_classification
    • task591_sciq_answer_generation
    • task823_peixian-rtgender_sentiment_analysis
    • task023_cosmosqa_question_generation
    • task900_freebase_qa_category_classification
    • task924_event2mind_word_generation
    • task152_tomqa_find_location_easy_noise
    • task1368_healthfact_sentence_generation
    • task1661_super_glue_classification
    • task1187_politifact_classification
    • task1728_web_nlg_data_to_text
    • task112_asset_simple_sentence_identification
    • task1340_msr_text_compression_compression
    • task072_abductivenli_answer_generation
    • task1504_hatexplain_answer_generation
    • task684_online_privacy_policy_text_information_type_generation
    • task1290_xsum_summarization
    • task075_squad1.1_answer_generation
    • task1587_scifact_classification
    • task384_socialiqa_question_classification
    • task1555_scitail_answer_generation
    • task1532_daily_dialog_emotion_classification
    • task239_tweetqa_answer_generation
    • task596_mocha_question_generation
    • task1411_dart_subject_identification
    • task1359_numer_sense_answer_generation
    • task329_gap_classification
    • task220_rocstories_title_classification
    • task316_crows-pairs_classification_stereotype
    • task495_semeval_headline_classification
    • task1168_brown_coarse_pos_tagging
    • task348_squad2.0_unanswerable_question_generation
    • task049_multirc_questions_needed_to_answer
    • task1534_daily_dialog_question_classification
    • task322_jigsaw_classification_threat
    • task295_semeval_2020_task4_commonsense_reasoning
    • task186_snli_contradiction_to_entailment_text_modification
    • task034_winogrande_question_modification_object
    • task160_replace_letter_in_a_sentence
    • task469_mrqa_answer_generation
    • task105_story_cloze-rocstories_sentence_generation
    • task649_race_blank_question_generation
    • task1536_daily_dialog_happiness_classification
    • task683_online_privacy_policy_text_purpose_answer_generation
    • task024_cosmosqa_answer_generation
    • task584_udeps_eng_fine_pos_tagging
    • task066_timetravel_binary_consistency_classification
    • task413_mickey_en_sentence_perturbation_generation
    • task182_duorc_question_generation
    • task028_drop_answer_generation
    • task1601_webquestions_answer_generation
    • task1295_adversarial_qa_question_answering
    • task201_mnli_neutral_classification
    • task038_qasc_combined_fact
    • task293_storycommonsense_emotion_text_generation
    • task572_recipe_nlg_text_generation
    • task517_emo_classify_emotion_of_dialogue
    • task382_hybridqa_answer_generation
    • task176_break_decompose_questions
    • task1291_multi_news_summarization
    • task155_count_nouns_verbs
    • task031_winogrande_question_generation_object
    • task279_stereoset_classification_stereotype
    • task1336_peixian_equity_evaluation_corpus_gender_classifier
    • task508_scruples_dilemmas_more_ethical_isidentifiable
    • task518_emo_different_dialogue_emotions
    • task077_splash_explanation_to_sql
    • task923_event2mind_classifier
    • task470_mrqa_question_generation
    • task638_multi_woz_classification
    • task1412_web_questions_question_answering
    • task847_pubmedqa_question_generation
    • task678_ollie_actual_relationship_answer_generation
    • task290_tellmewhy_question_answerability
    • task575_air_dialogue_classification
    • task189_snli_neutral_to_contradiction_text_modification
    • task026_drop_question_generation
    • task162_count_words_starting_with_letter
    • task079_conala_concat_strings
    • task610_conllpp_ner
    • task046_miscellaneous_question_typing
    • task197_mnli_domain_answer_generation
    • task1325_qa_zre_question_generation_on_subject_relation
    • task430_senteval_subject_count
    • task672_nummersense
    • task402_grailqa_paraphrase_generation
    • task904_hate_speech_offensive_classification
    • task192_hotpotqa_sentence_generation
    • task069_abductivenli_classification
    • task574_air_dialogue_sentence_generation
    • task187_snli_entailment_to_contradiction_text_modification
    • task749_glucose_reverse_cause_emotion_detection
    • task1552_scitail_question_generation
    • task750_aqua_multiple_choice_answering
    • task327_jigsaw_classification_toxic
    • task1502_hatexplain_classification
    • task328_jigsaw_classification_insult
    • task304_numeric_fused_head_resolution
    • task1293_kilt_tasks_hotpotqa_question_answering
    • task216_rocstories_correct_answer_generation
    • task1326_qa_zre_question_generation_from_answer
    • task1338_peixian_equity_evaluation_corpus_sentiment_classifier
    • task1729_personachat_generate_next
    • task1202_atomic_classification_xneed
    • task400_paws_paraphrase_classification
    • task502_scruples_anecdotes_whoiswrong_verification
    • task088_identify_typo_verification
    • task221_rocstories_two_choice_classification
    • task200_mnli_entailment_classification
    • task074_squad1.1_question_generation
    • task581_socialiqa_question_generation
    • task1186_nne_hrngo_classification
    • task898_freebase_qa_answer_generation
    • task1408_dart_similarity_classification
    • task168_strategyqa_question_decomposition
    • task1357_xlsum_summary_generation
    • task390_torque_text_span_selection
    • task165_mcscript_question_answering_commonsense
    • task1533_daily_dialog_formal_classification
    • task002_quoref_answer_generation
    • task1297_qasc_question_answering
    • task305_jeopardy_answer_generation_normal
    • task029_winogrande_full_object
    • task1327_qa_zre_answer_generation_from_question
    • task326_jigsaw_classification_obscene
    • task1542_every_ith_element_from_starting
    • task570_recipe_nlg_ner_generation
    • task1409_dart_text_generation
    • task401_numeric_fused_head_reference
    • task846_pubmedqa_classification
    • task1712_poki_classification
    • task344_hybridqa_answer_generation
    • task875_emotion_classification
    • task1214_atomic_classification_xwant
    • task106_scruples_ethical_judgment
    • task238_iirc_answer_from_passage_answer_generation
    • task1391_winogrande_easy_answer_generation
    • task195_sentiment140_classification
    • task163_count_words_ending_with_letter
    • task579_socialiqa_classification
    • task569_recipe_nlg_text_generation
    • task1602_webquestion_question_genreation
    • task747_glucose_cause_emotion_detection
    • task219_rocstories_title_answer_generation
    • task178_quartz_question_answering
    • task103_facts2story_long_text_generation
    • task301_record_question_generation
    • task1369_healthfact_sentence_generation
    • task515_senteval_odd_word_out
    • task496_semeval_answer_generation
    • task1658_billsum_summarization
    • task1204_atomic_classification_hinderedby
    • task1392_superglue_multirc_answer_verification
    • task306_jeopardy_answer_generation_double
    • task1286_openbookqa_question_answering
    • task159_check_frequency_of_words_in_sentence_pair
    • task151_tomqa_find_location_easy_clean
    • task323_jigsaw_classification_sexually_explicit
    • task037_qasc_generate_related_fact
    • task027_drop_answer_type_generation
    • task1596_event2mind_text_generation_2
    • task141_odd-man-out_classification_category
    • task194_duorc_answer_generation
    • task679_hope_edi_english_text_classification
    • task246_dream_question_generation
    • task1195_disflqa_disfluent_to_fluent_conversion
    • task065_timetravel_consistent_sentence_classification
    • task351_winomt_classification_gender_identifiability_anti
    • task580_socialiqa_answer_generation
    • task583_udeps_eng_coarse_pos_tagging
    • task202_mnli_contradiction_classification
    • task222_rocstories_two_chioce_slotting_classification
    • task498_scruples_anecdotes_whoiswrong_classification
    • task067_abductivenli_answer_generation
    • task616_cola_classification
    • task286_olid_offense_judgment
    • task188_snli_neutral_to_entailment_text_modification
    • task223_quartz_explanation_generation
    • task820_protoqa_answer_generation
    • task196_sentiment140_answer_generation
    • task1678_mathqa_answer_selection
    • task349_squad2.0_answerable_unanswerable_question_classification
    • task154_tomqa_find_location_hard_noise
    • task333_hateeval_classification_hate_en
    • task235_iirc_question_from_subtext_answer_generation
    • task1554_scitail_classification
    • task210_logic2text_structured_text_generation
    • task035_winogrande_question_modification_person
    • task230_iirc_passage_classification
    • task1356_xlsum_title_generation
    • task1726_mathqa_correct_answer_generation
    • task302_record_classification
    • task380_boolq_yes_no_question
    • task212_logic2text_classification
    • task748_glucose_reverse_cause_event_detection
    • task834_mathdataset_classification
    • task350_winomt_classification_gender_identifiability_pro
    • task191_hotpotqa_question_generation
    • task236_iirc_question_from_passage_answer_generation
    • task217_rocstories_ordering_answer_generation
    • task568_circa_question_generation
    • task614_glucose_cause_event_detection
    • task361_spolin_yesand_prompt_response_classification
    • task421_persent_sentence_sentiment_classification
    • task203_mnli_sentence_generation
    • task420_persent_document_sentiment_classification
    • task153_tomqa_find_location_hard_clean
    • task346_hybridqa_classification
    • task1211_atomic_classification_hassubevent
    • task360_spolin_yesand_response_generation
    • task510_reddit_tifu_title_summarization
    • task511_reddit_tifu_long_text_summarization
    • task345_hybridqa_answer_generation
    • task270_csrg_counterfactual_context_generation
    • task307_jeopardy_answer_generation_final
    • task001_quoref_question_generation
    • task089_swap_words_verification
    • task1196_atomic_classification_oeffect
    • task080_piqa_answer_generation
    • task1598_nyc_long_text_generation
    • task240_tweetqa_question_generation
    • task615_moviesqa_answer_generation
    • task1347_glue_sts-b_similarity_classification
    • task114_is_the_given_word_longest
    • task292_storycommonsense_character_text_generation
    • task115_help_advice_classification
    • task431_senteval_object_count
    • task1360_numer_sense_multiple_choice_qa_generation
    • task177_para-nmt_paraphrasing
    • task132_dais_text_modification
    • task269_csrg_counterfactual_story_generation
    • task233_iirc_link_exists_classification
    • task161_count_words_containing_letter
    • task1205_atomic_classification_isafter
    • task571_recipe_nlg_ner_generation
    • task1292_yelp_review_full_text_categorization
    • task428_senteval_inversion
    • task311_race_question_generation
    • task429_senteval_tense
    • task403_creak_commonsense_inference
    • task929_products_reviews_classification
    • task582_naturalquestion_answer_generation
    • task237_iirc_answer_from_subtext_answer_generation
    • task050_multirc_answerability
    • task184_break_generate_question
    • task669_ambigqa_answer_generation
    • task169_strategyqa_sentence_generation
    • task500_scruples_anecdotes_title_generation
    • task241_tweetqa_classification
    • task1345_glue_qqp_question_paraprashing
    • task218_rocstories_swap_order_answer_generation
    • task613_politifact_text_generation
    • task1167_penn_treebank_coarse_pos_tagging
    • task1422_mathqa_physics
    • task247_dream_answer_generation
    • task199_mnli_classification
    • task164_mcscript_question_answering_text
    • task1541_agnews_classification
    • task516_senteval_conjoints_inversion
    • task294_storycommonsense_motiv_text_generation
    • task501_scruples_anecdotes_post_type_verification
    • task213_rocstories_correct_ending_classification
    • task821_protoqa_question_generation
    • task493_review_polarity_classification
    • task308_jeopardy_answer_generation_all
    • task1595_event2mind_text_generation_1
    • task040_qasc_question_generation
    • task231_iirc_link_classification
    • task1727_wiqa_what_is_the_effect
    • task578_curiosity_dialogs_answer_generation
    • task310_race_classification
    • task309_race_answer_generation
    • task379_agnews_topic_classification
    • task030_winogrande_full_person
    • task1540_parsed_pdfs_summarization
    • task039_qasc_find_overlapping_words
    • task1206_atomic_classification_isbefore
    • task157_count_vowels_and_consonants
    • task339_record_answer_generation
    • task453_swag_answer_generation
    • task848_pubmedqa_classification
    • task673_google_wellformed_query_classification
    • task676_ollie_relationship_answer_generation
    • task268_casehold_legal_answer_generation
    • task844_financial_phrasebank_classification
    • task330_gap_answer_generation
    • task595_mocha_answer_generation
    • task1285_kpa_keypoint_matching
    • task234_iirc_passage_line_answer_generation
    • task494_review_polarity_answer_generation
    • task670_ambigqa_question_generation
    • task289_gigaword_summarization
    • npr
    • nli
    • SimpleWiki
    • amazon_review_2018
    • ccnews_title_text
    • agnews
    • xsum
    • msmarco
    • yahoo_answers_title_answer
    • squad_pairs
    • wow
    • mteb-amazon_counterfactual-avs_triplets
    • mteb-amazon_massive_intent-avs_triplets
    • mteb-amazon_massive_scenario-avs_triplets
    • mteb-amazon_reviews_multi-avs_triplets
    • mteb-banking77-avs_triplets
    • mteb-emotion-avs_triplets
    • mteb-imdb-avs_triplets
    • mteb-mtop_domain-avs_triplets
    • mteb-mtop_intent-avs_triplets
    • mteb-toxic_conversations_50k-avs_triplets
    • mteb-tweet_sentiment_extraction-avs_triplets
    • covid-bing-query-gpt4-avs_triplets
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): RandomProjection({'in_features': 384, 'out_features': 768, 'seed': 42})
)
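
The RandomProjection head above is specific to this model rather than a stock Sentence Transformers module. As a minimal sketch, assuming it is simply a frozen, seed-initialized linear map from the 384-dimensional MiniLM output to 768 dimensions (the repository's actual implementation may differ):

import torch

class RandomProjection(torch.nn.Module):
    """Illustrative stand-in for the projection head, not the shipped code."""

    def __init__(self, in_features: int = 384, out_features: int = 768, seed: int = 42):
        super().__init__()
        generator = torch.Generator().manual_seed(seed)
        # Fixed, non-trainable projection matrix, reproducible via the seed.
        self.register_buffer("weight", torch.randn(out_features, in_features, generator=generator))

    def forward(self, features: dict) -> dict:
        # Sentence Transformers modules exchange a feature dict between pipeline stages.
        features["sentence_embedding"] = features["sentence_embedding"] @ self.weight.T
        return features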

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final")
# Run inference
sentences = [
    'who does the chief risk officer report to',
    "Chief risk officer Chief risk officer The chief risk officer (CRO) or chief risk management officer (CRMO) of a firm or corporation is the executive accountable for enabling the efficient and effective governance of significant risks, and related opportunities, to a business and its various segments. Risks are commonly categorized as strategic, reputational, operational, financial, or compliance-related. CROs are accountable to the Executive Committee and The Board for enabling the business to balance risk and reward. In more complex organizations, they are generally responsible for coordinating the organization's Enterprise Risk Management (ERM) approach. The CRO is responsible for assessing and mitigating significant competitive,",
    "Chief risk officer a company's executive chief officer and chief financial officer to clarify the precision of its financial reports. Moreover, to ensure the mentioned accuracy of financial reports, internal controls are required. Accordingly, each financial report required an internal control report to prevent fraud. Furthermore, the CRO has to be aware of everything occurring in his company on a daily basis, but he must also be current on all of the requirements from the SEC. In addition, the CRO restrains corporate risk by managing compliance. Why is a CRO so important in financial institutions? There is a report of having a CRO",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Triplet

  • cosine_accuracy: 0.9156
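
cosine_accuracy is the share of evaluation triplets whose anchor is closer, by cosine similarity, to its positive than to its negative. A minimal sketch of that computation (illustrative, not the evaluator's source):

import torch.nn.functional as F

def cosine_accuracy(anchors, positives, negatives) -> float:
    # Each argument is a (num_triplets, dim) tensor of embeddings.
    pos = F.cosine_similarity(anchors, positives, dim=-1)
    neg = F.cosine_similarity(anchors, negatives, dim=-1)
    return (pos > neg).float().mean().item()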

Training Details

Training Datasets

NQ

  • Dataset: NQ
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10, mean 11.86, max 23 tokens
    • positive (string): min 111, mean 137.85, max 212 tokens
    • negative (string): min 110, mean 138.8, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
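
Each training dataset uses this same objective. As a rough PyTorch sketch (not the library's source), MultipleNegativesRankingLoss with scale 20.0 and cosine similarity scores each anchor against all in-batch candidates and applies cross-entropy toward the anchor's own positive:

import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives, negatives, scale: float = 20.0):
    # anchors, positives, negatives: (batch, dim) embedding tensors.
    candidates = torch.cat([positives, negatives], dim=0)          # (2 * batch, dim)
    scores = scale * F.cosine_similarity(
        anchors.unsqueeze(1), candidates.unsqueeze(0), dim=-1
    )                                                              # (batch, 2 * batch)
    labels = torch.arange(anchors.size(0), device=anchors.device)  # anchor i pairs with positive i
    return F.cross_entropy(scores, labels)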
    

pubmed

  • Dataset: pubmed
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 22.62, max 50 tokens
    • positive (string): min 77, mean 240.7, max 256 tokens
    • negative (string): min 77, mean 239.5, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

specter_train_triples

  • Dataset: specter_train_triples
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 15.41, max 55 tokens
    • positive (string): min 4, mean 14.07, max 37 tokens
    • negative (string): min 4, mean 15.69, max 50 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

S2ORC_citations_abstracts

  • Dataset: S2ORC_citations_abstracts
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 19, mean 198.24, max 256 tokens
    • positive (string): min 19, mean 207.17, max 256 tokens
    • negative (string): min 25, mean 204.86, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

fever

  • Dataset: fever
  • Size: 74,514 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 12.51, max 49 tokens
    • positive (string): min 48, mean 112.46, max 139 tokens
    • negative (string): min 27, mean 113.69, max 155 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

gooaq_pairs

  • Dataset: gooaq_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 11.96, max 19 tokens
    • positive (string): min 12, mean 59.94, max 144 tokens
    • negative (string): min 18, mean 63.02, max 150 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

codesearchnet

  • Dataset: codesearchnet
  • Size: 15,210 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 29.65, max 156 tokens
    • positive (string): min 27, mean 134.78, max 256 tokens
    • negative (string): min 27, mean 164.44, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wikihow

  • Dataset: wikihow
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 8.03, max 19 tokens
    • positive (string): min 4, mean 44.2, max 117 tokens
    • negative (string): min 10, mean 36.49, max 104 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

WikiAnswers

  • Dataset: WikiAnswers
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 12.77, max 44 tokens
    • positive (string): min 6, mean 12.89, max 41 tokens
    • negative (string): min 6, mean 13.36, max 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

eli5_question_answer

  • Dataset: eli5_question_answer
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 21.24, max 69 tokens
    • positive (string): min 11, mean 98.62, max 256 tokens
    • negative (string): min 13, mean 108.48, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon-qa

  • Dataset: amazon-qa
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 22.57, max 256 tokens
    • positive (string): min 14, mean 54.48, max 256 tokens
    • negative (string): min 16, mean 62.82, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

medmcqa

  • Dataset: medmcqa
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 20.68, max 174 tokens
    • positive (string): min 3, mean 112.58, max 256 tokens
    • negative (string): min 3, mean 110.9, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

zeroshot

  • Dataset: zeroshot
  • Size: 15,210 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 8.55, max 19 tokens
    • positive (string): min 10, mean 111.81, max 170 tokens
    • negative (string): min 5, mean 116.53, max 239 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

TriviaQA_pairs

  • Dataset: TriviaQA_pairs
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 19.77, max 77 tokens
    • positive (string): min 22, mean 245.04, max 256 tokens
    • negative (string): min 50, mean 233.43, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

PAQ_pairs

  • Dataset: PAQ_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 12.55, max 21 tokens
    • positive (string): min 109, mean 136.21, max 212 tokens
    • negative (string): min 112, mean 135.15, max 223 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

stackexchange_duplicate_questions_title-body_title-body

  • Dataset: stackexchange_duplicate_questions_title-body_title-body
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 20, mean 147.41, max 256 tokens
    • positive (string): min 24, mean 144.01, max 256 tokens
    • negative (string): min 24, mean 201.86, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

trex

  • Dataset: trex
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 9.53, max 20 tokens
    • positive (string): min 18, mean 102.65, max 190 tokens
    • negative (string): min 26, mean 117.98, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

flickr30k_captions

  • Dataset: flickr30k_captions
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 15.72, max 74 tokens
    • positive (string): min 5, mean 15.93, max 58 tokens
    • negative (string): min 7, mean 17.11, max 52 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

hotpotqa

  • Dataset: hotpotqa
  • Size: 40,048 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 24.11, max 130 tokens
    • positive (string): min 21, mean 113.67, max 160 tokens
    • negative (string): min 39, mean 114.74, max 189 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task671_ambigqa_text_generation

  • Dataset: task671_ambigqa_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 11, mean 12.72, max 26 tokens
    • positive (string): min 11, mean 12.53, max 23 tokens
    • negative (string): min 11, mean 12.24, max 19 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task061_ropes_answer_generation

  • Dataset: task061_ropes_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 117, mean 210.74, max 256 tokens
    • positive (string): min 117, mean 210.15, max 256 tokens
    • negative (string): min 119, mean 212.51, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task285_imdb_answer_generation

  • Dataset: task285_imdb_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 46, mean 209.59, max 256 tokens
    • positive (string): min 49, mean 204.57, max 256 tokens
    • negative (string): min 46, mean 209.59, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task905_hate_speech_offensive_classification

  • Dataset: task905_hate_speech_offensive_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 15, mean 41.93, max 164 tokens
    • positive (string): min 13, mean 41.02, max 198 tokens
    • negative (string): min 13, mean 32.41, max 135 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task566_circa_classification

  • Dataset: task566_circa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 20, mean 27.86, max 48 tokens
    • positive (string): min 19, mean 27.24, max 44 tokens
    • negative (string): min 20, mean 27.52, max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_snli_entailment_to_neutral_text_modification

  • Dataset: task184_snli_entailment_to_neutral_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17, mean 29.87, max 72 tokens
    • positive (string): min 16, mean 28.89, max 60 tokens
    • negative (string): min 17, mean 30.34, max 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task280_stereoset_classification_stereotype_type

  • Dataset: task280_stereoset_classification_stereotype_type
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 18.47, max 53 tokens
    • positive (string): min 8, mean 16.93, max 53 tokens
    • negative (string): min 8, mean 16.85, max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1599_smcalflow_classification

  • Dataset: task1599_smcalflow_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 11.31, max 37 tokens
    • positive (string): min 3, mean 10.56, max 38 tokens
    • negative (string): min 5, mean 16.28, max 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1384_deal_or_no_dialog_classification

  • Dataset: task1384_deal_or_no_dialog_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 59.31, max 256 tokens
    • positive (string): min 12, mean 59.78, max 256 tokens
    • negative (string): min 15, mean 58.71, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task591_sciq_answer_generation

  • Dataset: task591_sciq_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 17.59, max 70 tokens
    • positive (string): min 7, mean 17.13, max 43 tokens
    • negative (string): min 6, mean 16.72, max 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task823_peixian-rtgender_sentiment_analysis

  • Dataset: task823_peixian-rtgender_sentiment_analysis
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16, mean 56.98, max 179 tokens
    • positive (string): min 16, mean 59.75, max 153 tokens
    • negative (string): min 14, mean 60.1, max 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task023_cosmosqa_question_generation

  • Dataset: task023_cosmosqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35, mean 78.99, max 159 tokens
    • positive (string): min 34, mean 80.06, max 165 tokens
    • negative (string): min 35, mean 79.04, max 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task900_freebase_qa_category_classification

  • Dataset: task900_freebase_qa_category_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 20.52, max 88 tokens
    • positive (string): min 8, mean 18.26, max 62 tokens
    • negative (string): min 8, mean 19.06, max 69 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task924_event2mind_word_generation

  • Dataset: task924_event2mind_word_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17, mean 32.1, max 64 tokens
    • positive (string): min 17, mean 32.18, max 70 tokens
    • negative (string): min 17, mean 31.42, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task152_tomqa_find_location_easy_noise

  • Dataset: task152_tomqa_find_location_easy_noise
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 37, mean 52.82, max 79 tokens
    • positive (string): min 37, mean 52.35, max 78 tokens
    • negative (string): min 37, mean 52.73, max 82 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1368_healthfact_sentence_generation

  • Dataset: task1368_healthfact_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 91, mean 240.74, max 256 tokens
    • positive (string): min 84, mean 239.62, max 256 tokens
    • negative (string): min 97, mean 245.07, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1661_super_glue_classification

  • Dataset: task1661_super_glue_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35, mean 140.97, max 256 tokens
    • positive (string): min 31, mean 143.09, max 256 tokens
    • negative (string): min 31, mean 142.81, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1187_politifact_classification

  • Dataset: task1187_politifact_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 33.14, max 79 tokens
    • positive (string): min 10, mean 31.38, max 75 tokens
    • negative (string): min 13, mean 32.0, max 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1728_web_nlg_data_to_text

  • Dataset: task1728_web_nlg_data_to_text
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 43.18, max 152 tokens
    • positive (string): min 7, mean 46.4, max 152 tokens
    • negative (string): min 8, mean 43.15, max 152 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task112_asset_simple_sentence_identification

  • Dataset: task112_asset_simple_sentence_identification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 18, mean 52.11, max 136 tokens
    • positive (string): min 18, mean 51.9, max 144 tokens
    • negative (string): min 22, mean 52.06, max 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1340_msr_text_compression_compression

  • Dataset: task1340_msr_text_compression_compression
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 41.91, max 116 tokens
    • positive (string): min 14, mean 44.3, max 133 tokens
    • negative (string): min 12, mean 40.09, max 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task072_abductivenli_answer_generation

  • Dataset: task072_abductivenli_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17, mean 26.79, max 56 tokens
    • positive (string): min 16, mean 26.15, max 47 tokens
    • negative (string): min 16, mean 26.43, max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1504_hatexplain_answer_generation

  • Dataset: task1504_hatexplain_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 28.83, max 72 tokens
    • positive (string): min 5, mean 24.33, max 86 tokens
    • negative (string): min 5, mean 28.06, max 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task684_online_privacy_policy_text_information_type_generation

  • Dataset: task684_online_privacy_policy_text_information_type_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10, mean 29.89, max 68 tokens
    • positive (string): min 10, mean 30.11, max 61 tokens
    • negative (string): min 14, mean 30.07, max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1290_xsum_summarization

  • Dataset: task1290_xsum_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 39 tokens, mean: 226.61 tokens, max: 256 tokens
    • positive (string): min: 50 tokens, mean: 229.94 tokens, max: 256 tokens
    • negative (string): min: 34 tokens, mean: 229.42 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
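
Many columns above top out at 256 tokens, which matches the base model's maximum sequence length, so those maxima reflect truncation rather than the raw text length. Below is a rough sketch of how the min/mean/max statistics could be recomputed with this model's tokenizer; the `texts` list is a placeholder, since the card does not ship the raw per-task data:

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    texts = ["example anchor text", "another anchor"]  # placeholder data

    lengths = [
        # Counts include special tokens and are capped at the model's
        # max sequence length, hence the frequent "max: 256 tokens".
        len(model.tokenizer(t, truncation=True,
                            max_length=model.max_seq_length)["input_ids"])
        for t in texts
    ]
    print(min(lengths), sum(lengths) / len(lengths), max(lengths))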
    

task075_squad1.1_answer_generation

  • Dataset: task075_squad1.1_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 48 tokens, mean: 167.46 tokens, max: 256 tokens
    • positive (string): min: 45 tokens, mean: 172.96 tokens, max: 256 tokens
    • negative (string): min: 46 tokens, mean: 179.84 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1587_scifact_classification

  • Dataset: task1587_scifact_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 88 tokens, mean: 242.78 tokens, max: 256 tokens
    • positive (string): min: 90 tokens, mean: 246.97 tokens, max: 256 tokens
    • negative (string): min: 86 tokens, mean: 244.62 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task384_socialiqa_question_classification

  • Dataset: task384_socialiqa_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 24 tokens, mean: 35.43 tokens, max: 78 tokens
    • positive (string): min: 22 tokens, mean: 34.43 tokens, max: 59 tokens
    • negative (string): min: 22 tokens, mean: 34.63 tokens, max: 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1555_scitail_answer_generation

  • Dataset: task1555_scitail_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 18 tokens, mean: 36.85 tokens, max: 90 tokens
    • positive (string): min: 18 tokens, mean: 36.15 tokens, max: 80 tokens
    • negative (string): min: 18 tokens, mean: 36.55 tokens, max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1532_daily_dialog_emotion_classification

  • Dataset: task1532_daily_dialog_emotion_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 16 tokens, mean: 136.46 tokens, max: 256 tokens
    • positive (string): min: 15 tokens, mean: 140.46 tokens, max: 256 tokens
    • negative (string): min: 17 tokens, mean: 134.53 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task239_tweetqa_answer_generation

  • Dataset: task239_tweetqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 28 tokens, mean: 55.93 tokens, max: 91 tokens
    • positive (string): min: 29 tokens, mean: 56.54 tokens, max: 92 tokens
    • negative (string): min: 25 tokens, mean: 55.95 tokens, max: 81 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task596_mocha_question_generation

  • Dataset: task596_mocha_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 34 tokens, mean: 80.84 tokens, max: 163 tokens
    • positive (string): min: 12 tokens, mean: 95.19 tokens, max: 256 tokens
    • negative (string): min: 10 tokens, mean: 45.62 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1411_dart_subject_identification

  • Dataset: task1411_dart_subject_identification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 14.95 tokens, max: 74 tokens
    • positive (string): min: 6 tokens, mean: 14.05 tokens, max: 37 tokens
    • negative (string): min: 6 tokens, mean: 14.34 tokens, max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1359_numer_sense_answer_generation

  • Dataset: task1359_numer_sense_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 10 tokens, mean: 18.74 tokens, max: 30 tokens
    • positive (string): min: 10 tokens, mean: 18.39 tokens, max: 33 tokens
    • negative (string): min: 10 tokens, mean: 18.29 tokens, max: 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task329_gap_classification

  • Dataset: task329_gap_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 40 tokens, mean: 123.73 tokens, max: 256 tokens
    • positive (string): min: 62 tokens, mean: 127.36 tokens, max: 256 tokens
    • negative (string): min: 58 tokens, mean: 128.32 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task220_rocstories_title_classification

  • Dataset: task220_rocstories_title_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 53 tokens, mean: 80.74 tokens, max: 116 tokens
    • positive (string): min: 51 tokens, mean: 81.05 tokens, max: 108 tokens
    • negative (string): min: 55 tokens, mean: 79.84 tokens, max: 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task316_crows-pairs_classification_stereotype

  • Dataset: task316_crows-pairs_classification_stereotype
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 8 tokens, mean: 19.78 tokens, max: 51 tokens
    • positive (string): min: 7 tokens, mean: 18.21 tokens, max: 41 tokens
    • negative (string): min: 7 tokens, mean: 19.83 tokens, max: 52 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task495_semeval_headline_classification

  • Dataset: task495_semeval_headline_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 17 tokens, mean: 24.49 tokens, max: 42 tokens
    • positive (string): min: 15 tokens, mean: 24.19 tokens, max: 41 tokens
    • negative (string): min: 15 tokens, mean: 24.2 tokens, max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1168_brown_coarse_pos_tagging

  • Dataset: task1168_brown_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 13 tokens, mean: 43.8 tokens, max: 142 tokens
    • positive (string): min: 12 tokens, mean: 43.34 tokens, max: 197 tokens
    • negative (string): min: 12 tokens, mean: 44.88 tokens, max: 197 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task348_squad2.0_unanswerable_question_generation

  • Dataset: task348_squad2.0_unanswerable_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 30 tokens, mean: 152.57 tokens, max: 256 tokens
    • positive (string): min: 38 tokens, mean: 161.4 tokens, max: 256 tokens
    • negative (string): min: 33 tokens, mean: 165.55 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task049_multirc_questions_needed_to_answer

  • Dataset: task049_multirc_questions_needed_to_answer
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 174 tokens, mean: 252.61 tokens, max: 256 tokens
    • positive (string): min: 169 tokens, mean: 252.72 tokens, max: 256 tokens
    • negative (string): min: 178 tokens, mean: 252.82 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1534_daily_dialog_question_classification

  • Dataset: task1534_daily_dialog_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 17 tokens, mean: 125.62 tokens, max: 256 tokens
    • positive (string): min: 15 tokens, mean: 130.54 tokens, max: 256 tokens
    • negative (string): min: 16 tokens, mean: 135.15 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task322_jigsaw_classification_threat

  • Dataset: task322_jigsaw_classification_threat
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 54.41 tokens, max: 256 tokens
    • positive (string): min: 6 tokens, mean: 61.29 tokens, max: 249 tokens
    • negative (string): min: 6 tokens, mean: 61.83 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task295_semeval_2020_task4_commonsense_reasoning

  • Dataset: task295_semeval_2020_task4_commonsense_reasoning
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 25 tokens, mean: 45.19 tokens, max: 92 tokens
    • positive (string): min: 25 tokens, mean: 45.14 tokens, max: 95 tokens
    • negative (string): min: 25 tokens, mean: 44.6 tokens, max: 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task186_snli_contradiction_to_entailment_text_modification

  • Dataset: task186_snli_contradiction_to_entailment_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 18 tokens, mean: 31.16 tokens, max: 102 tokens
    • positive (string): min: 18 tokens, mean: 30.23 tokens, max: 65 tokens
    • negative (string): min: 18 tokens, mean: 32.18 tokens, max: 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task034_winogrande_question_modification_object

  • Dataset: task034_winogrande_question_modification_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 29 tokens, mean: 36.34 tokens, max: 53 tokens
    • positive (string): min: 29 tokens, mean: 35.6 tokens, max: 54 tokens
    • negative (string): min: 29 tokens, mean: 34.88 tokens, max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task160_replace_letter_in_a_sentence

  • Dataset: task160_replace_letter_in_a_sentence
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 29 tokens, mean: 31.98 tokens, max: 49 tokens
    • positive (string): min: 28 tokens, mean: 31.78 tokens, max: 41 tokens
    • negative (string): min: 29 tokens, mean: 31.79 tokens, max: 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task469_mrqa_answer_generation

  • Dataset: task469_mrqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 27 tokens, mean: 182.73 tokens, max: 256 tokens
    • positive (string): min: 25 tokens, mean: 181.46 tokens, max: 256 tokens
    • negative (string): min: 27 tokens, mean: 184.86 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task105_story_cloze-rocstories_sentence_generation

  • Dataset: task105_story_cloze-rocstories_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 36 tokens, mean: 55.59 tokens, max: 75 tokens
    • positive (string): min: 35 tokens, mean: 54.88 tokens, max: 76 tokens
    • negative (string): min: 36 tokens, mean: 55.93 tokens, max: 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task649_race_blank_question_generation

  • Dataset: task649_race_blank_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 36 tokens, mean: 253.15 tokens, max: 256 tokens
    • positive (string): min: 36 tokens, mean: 252.81 tokens, max: 256 tokens
    • negative (string): min: 157 tokens, mean: 253.95 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1536_daily_dialog_happiness_classification

  • Dataset: task1536_daily_dialog_happiness_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 13 tokens, mean: 128.45 tokens, max: 256 tokens
    • positive (string): min: 13 tokens, mean: 135.05 tokens, max: 256 tokens
    • negative (string): min: 16 tokens, mean: 143.71 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task683_online_privacy_policy_text_purpose_answer_generation

  • Dataset: task683_online_privacy_policy_text_purpose_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 10 tokens, mean: 29.98 tokens, max: 68 tokens
    • positive (string): min: 10 tokens, mean: 30.36 tokens, max: 64 tokens
    • negative (string): min: 14 tokens, mean: 29.89 tokens, max: 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task024_cosmosqa_answer_generation

  • Dataset: task024_cosmosqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 45 tokens, mean: 92.42 tokens, max: 176 tokens
    • positive (string): min: 47 tokens, mean: 93.6 tokens, max: 174 tokens
    • negative (string): min: 42 tokens, mean: 94.42 tokens, max: 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task584_udeps_eng_fine_pos_tagging

  • Dataset: task584_udeps_eng_fine_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 12 tokens, mean: 40.27 tokens, max: 120 tokens
    • positive (string): min: 12 tokens, mean: 39.65 tokens, max: 186 tokens
    • negative (string): min: 12 tokens, mean: 40.61 tokens, max: 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task066_timetravel_binary_consistency_classification

  • Dataset: task066_timetravel_binary_consistency_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 42 tokens, mean: 66.76 tokens, max: 93 tokens
    • positive (string): min: 43 tokens, mean: 67.45 tokens, max: 94 tokens
    • negative (string): min: 45 tokens, mean: 66.98 tokens, max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task413_mickey_en_sentence_perturbation_generation

  • Dataset: task413_mickey_en_sentence_perturbation_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 13.75 tokens, max: 21 tokens
    • positive (string): min: 7 tokens, mean: 13.81 tokens, max: 21 tokens
    • negative (string): min: 7 tokens, mean: 13.31 tokens, max: 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task182_duorc_question_generation

  • Dataset: task182_duorc_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 99 tokens, mean: 242.3 tokens, max: 256 tokens
    • positive (string): min: 120 tokens, mean: 246.33 tokens, max: 256 tokens
    • negative (string): min: 99 tokens, mean: 246.42 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task028_drop_answer_generation

  • Dataset: task028_drop_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 76 tokens, mean: 230.65 tokens, max: 256 tokens
    • positive (string): min: 86 tokens, mean: 234.71 tokens, max: 256 tokens
    • negative (string): min: 81 tokens, mean: 235.81 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1601_webquestions_answer_generation

  • Dataset: task1601_webquestions_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 9 tokens, mean: 16.51 tokens, max: 28 tokens
    • positive (string): min: 11 tokens, mean: 16.69 tokens, max: 28 tokens
    • negative (string): min: 9 tokens, mean: 16.73 tokens, max: 27 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1295_adversarial_qa_question_answering

  • Dataset: task1295_adversarial_qa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 45 tokens, mean: 164.89 tokens, max: 256 tokens
    • positive (string): min: 54 tokens, mean: 166.37 tokens, max: 256 tokens
    • negative (string): min: 48 tokens, mean: 166.85 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task201_mnli_neutral_classification

  • Dataset: task201_mnli_neutral_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 24 tokens, mean: 73.03 tokens, max: 218 tokens
    • positive (string): min: 25 tokens, mean: 73.42 tokens, max: 170 tokens
    • negative (string): min: 27 tokens, mean: 72.64 tokens, max: 205 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task038_qasc_combined_fact

  • Dataset: task038_qasc_combined_fact
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 18 tokens, mean: 31.27 tokens, max: 57 tokens
    • positive (string): min: 19 tokens, mean: 30.52 tokens, max: 53 tokens
    • negative (string): min: 18 tokens, mean: 30.84 tokens, max: 53 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task293_storycommonsense_emotion_text_generation

  • Dataset: task293_storycommonsense_emotion_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 14 tokens, mean: 40.0 tokens, max: 86 tokens
    • positive (string): min: 15 tokens, mean: 40.18 tokens, max: 86 tokens
    • negative (string): min: 14 tokens, mean: 37.66 tokens, max: 85 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task572_recipe_nlg_text_generation

  • Dataset: task572_recipe_nlg_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 24 tokens, mean: 114.49 tokens, max: 256 tokens
    • positive (string): min: 24 tokens, mean: 119.68 tokens, max: 256 tokens
    • negative (string): min: 24 tokens, mean: 124.27 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task517_emo_classify_emotion_of_dialogue

  • Dataset: task517_emo_classify_emotion_of_dialogue
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 18.12 tokens, max: 78 tokens
    • positive (string): min: 7 tokens, mean: 17.16 tokens, max: 59 tokens
    • negative (string): min: 7 tokens, mean: 18.4 tokens, max: 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task382_hybridqa_answer_generation

  • Dataset: task382_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 29 tokens, mean: 42.31 tokens, max: 70 tokens
    • positive (string): min: 29 tokens, mean: 41.59 tokens, max: 74 tokens
    • negative (string): min: 28 tokens, mean: 41.75 tokens, max: 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task176_break_decompose_questions

  • Dataset: task176_break_decompose_questions
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 9 tokens, mean: 17.43 tokens, max: 41 tokens
    • positive (string): min: 8 tokens, mean: 17.21 tokens, max: 39 tokens
    • negative (string): min: 8 tokens, mean: 15.73 tokens, max: 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1291_multi_news_summarization

  • Dataset: task1291_multi_news_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 116 tokens, mean: 255.36 tokens, max: 256 tokens
    • positive (string): min: 146 tokens, mean: 255.71 tokens, max: 256 tokens
    • negative (string): min: 68 tokens, mean: 252.32 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task155_count_nouns_verbs

  • Dataset: task155_count_nouns_verbs
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 23 tokens, mean: 27.02 tokens, max: 56 tokens
    • positive (string): min: 23 tokens, mean: 26.8 tokens, max: 43 tokens
    • negative (string): min: 23 tokens, mean: 26.96 tokens, max: 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
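
Because every dataset in this section shares the anchor/positive/negative layout, any of them can be spot-checked with a triplet accuracy metric: the share of anchors that embed closer to their positive than to their negative. A hypothetical sketch using sentence-transformers' built-in evaluator (the three example strings are placeholders, not card data):

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.evaluation import TripletEvaluator

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    evaluator = TripletEvaluator(
        anchors=["how many nouns are in this sentence?"],     # placeholder
        positives=["count the nouns in the given sentence"],  # placeholder
        negatives=["translate the sentence to French"],       # placeholder
        name="triplet-spot-check",
    )
    print(evaluator(model))  # fraction of triplets ranked correctly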
    

task031_winogrande_question_generation_object

  • Dataset: task031_winogrande_question_generation_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 7.43 tokens, max: 11 tokens
    • positive (string): min: 7 tokens, mean: 7.31 tokens, max: 11 tokens
    • negative (string): min: 7 tokens, mean: 7.25 tokens, max: 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task279_stereoset_classification_stereotype

  • Dataset: task279_stereoset_classification_stereotype
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 8 tokens, mean: 17.86 tokens, max: 41 tokens
    • positive (string): min: 8 tokens, mean: 15.52 tokens, max: 43 tokens
    • negative (string): min: 8 tokens, mean: 17.39 tokens, max: 50 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1336_peixian_equity_evaluation_corpus_gender_classifier

  • Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 6 tokens, mean: 9.59 tokens, max: 17 tokens
    • positive (string): min: 6 tokens, mean: 9.58 tokens, max: 16 tokens
    • negative (string): min: 6 tokens, mean: 9.64 tokens, max: 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task508_scruples_dilemmas_more_ethical_isidentifiable

  • Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 12 tokens, mean: 29.67 tokens, max: 94 tokens
    • positive (string): min: 12 tokens, mean: 28.64 tokens, max: 94 tokens
    • negative (string): min: 12 tokens, mean: 28.71 tokens, max: 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task518_emo_different_dialogue_emotions

  • Dataset: task518_emo_different_dialogue_emotions
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 28 tokens, mean: 47.83 tokens, max: 106 tokens
    • positive (string): min: 28 tokens, mean: 45.5 tokens, max: 116 tokens
    • negative (string): min: 26 tokens, mean: 45.83 tokens, max: 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task077_splash_explanation_to_sql

  • Dataset: task077_splash_explanation_to_sql
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 8 tokens, mean: 39.84 tokens, max: 126 tokens
    • positive (string): min: 8 tokens, mean: 39.9 tokens, max: 126 tokens
    • negative (string): min: 8 tokens, mean: 35.84 tokens, max: 111 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task923_event2mind_classifier

  • Dataset: task923_event2mind_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 10 tokens, mean: 20.63 tokens, max: 46 tokens
    • positive (string): min: 11 tokens, mean: 18.63 tokens, max: 41 tokens
    • negative (string): min: 11 tokens, mean: 19.5 tokens, max: 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task470_mrqa_question_generation

  • Dataset: task470_mrqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 13 tokens, mean: 171.07 tokens, max: 256 tokens
    • positive (string): min: 11 tokens, mean: 173.67 tokens, max: 256 tokens
    • negative (string): min: 14 tokens, mean: 179.34 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task638_multi_woz_classification

  • Dataset: task638_multi_woz_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 78 tokens, mean: 223.21 tokens, max: 256 tokens
    • positive (string): min: 76 tokens, mean: 220.32 tokens, max: 256 tokens
    • negative (string): min: 64 tokens, mean: 219.78 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1412_web_questions_question_answering

  • Dataset: task1412_web_questions_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 6 tokens, mean: 10.32 tokens, max: 17 tokens
    • positive (string): min: 6 tokens, mean: 10.18 tokens, max: 17 tokens
    • negative (string): min: 6 tokens, mean: 10.07 tokens, max: 16 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task847_pubmedqa_question_generation

  • Dataset: task847_pubmedqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 21 tokens, mean: 249.18 tokens, max: 256 tokens
    • positive (string): min: 21 tokens, mean: 249.32 tokens, max: 256 tokens
    • negative (string): min: 43 tokens, mean: 249.01 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task678_ollie_actual_relationship_answer_generation

  • Dataset: task678_ollie_actual_relationship_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 20 tokens, mean: 40.91 tokens, max: 95 tokens
    • positive (string): min: 19 tokens, mean: 38.11 tokens, max: 102 tokens
    • negative (string): min: 18 tokens, mean: 41.31 tokens, max: 104 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task290_tellmewhy_question_answerability

  • Dataset: task290_tellmewhy_question_answerability
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 37 tokens, mean: 62.72 tokens, max: 95 tokens
    • positive (string): min: 36 tokens, mean: 62.32 tokens, max: 94 tokens
    • negative (string): min: 37 tokens, mean: 62.95 tokens, max: 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task575_air_dialogue_classification

  • Dataset: task575_air_dialogue_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 4 tokens, mean: 14.19 tokens, max: 45 tokens
    • positive (string): min: 4 tokens, mean: 13.59 tokens, max: 43 tokens
    • negative (string): min: 4 tokens, mean: 12.31 tokens, max: 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task189_snli_neutral_to_contradiction_text_modification

  • Dataset: task189_snli_neutral_to_contradiction_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 18 tokens, mean: 31.84 tokens, max: 60 tokens
    • positive (string): min: 18 tokens, mean: 30.73 tokens, max: 57 tokens
    • negative (string): min: 18 tokens, mean: 33.22 tokens, max: 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task026_drop_question_generation

  • Dataset: task026_drop_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 82 tokens, mean: 219.35 tokens, max: 256 tokens
    • positive (string): min: 57 tokens, mean: 222.81 tokens, max: 256 tokens
    • negative (string): min: 96 tokens, mean: 232.0 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task162_count_words_starting_with_letter

  • Dataset: task162_count_words_starting_with_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 28 tokens, mean: 32.17 tokens, max: 56 tokens
    • positive (string): min: 28 tokens, mean: 31.76 tokens, max: 45 tokens
    • negative (string): min: 28 tokens, mean: 31.63 tokens, max: 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task079_conala_concat_strings

  • Dataset: task079_conala_concat_strings
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 11 tokens, mean: 39.49 tokens, max: 76 tokens
    • positive (string): min: 11 tokens, mean: 34.22 tokens, max: 80 tokens
    • negative (string): min: 11 tokens, mean: 33.51 tokens, max: 76 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task610_conllpp_ner

  • Dataset: task610_conllpp_ner
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 4 tokens, mean: 19.53 tokens, max: 62 tokens
    • positive (string): min: 4 tokens, mean: 20.3 tokens, max: 62 tokens
    • negative (string): min: 4 tokens, mean: 14.15 tokens, max: 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task046_miscellaneous_question_typing

  • Dataset: task046_miscellaneous_question_typing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 16 tokens, mean: 25.34 tokens, max: 70 tokens
    • positive (string): min: 16 tokens, mean: 24.92 tokens, max: 70 tokens
    • negative (string): min: 16 tokens, mean: 25.11 tokens, max: 57 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task197_mnli_domain_answer_generation

  • Dataset: task197_mnli_domain_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 15 tokens, mean: 43.91 tokens, max: 197 tokens
    • positive (string): min: 12 tokens, mean: 45.21 tokens, max: 211 tokens
    • negative (string): min: 11 tokens, mean: 39.5 tokens, max: 115 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1325_qa_zre_question_generation_on_subject_relation

  • Dataset: task1325_qa_zre_question_generation_on_subject_relation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 18 tokens, mean: 50.72 tokens, max: 256 tokens
    • positive (string): min: 20 tokens, mean: 49.76 tokens, max: 180 tokens
    • negative (string): min: 22 tokens, mean: 54.01 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task430_senteval_subject_count

  • Dataset: task430_senteval_subject_count
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 17.36 tokens, max: 35 tokens
    • positive (string): min: 7 tokens, mean: 15.41 tokens, max: 34 tokens
    • negative (string): min: 7 tokens, mean: 16.16 tokens, max: 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task672_nummersense

  • Dataset: task672_nummersense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 15.72 tokens, max: 30 tokens
    • positive (string): min: 7 tokens, mean: 15.34 tokens, max: 27 tokens
    • negative (string): min: 7 tokens, mean: 15.28 tokens, max: 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task402_grailqa_paraphrase_generation

  • Dataset: task402_grailqa_paraphrase_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 23 tokens, mean: 130.03 tokens, max: 256 tokens
    • positive (string): min: 24 tokens, mean: 139.65 tokens, max: 256 tokens
    • negative (string): min: 22 tokens, mean: 136.9 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task904_hate_speech_offensive_classification

  • Dataset: task904_hate_speech_offensive_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 8 tokens, mean: 34.87 tokens, max: 157 tokens
    • positive (string): min: 8 tokens, mean: 34.42 tokens, max: 256 tokens
    • negative (string): min: 5 tokens, mean: 27.88 tokens, max: 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task192_hotpotqa_sentence_generation

  • Dataset: task192_hotpotqa_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 37 tokens, mean: 125.31 tokens, max: 256 tokens
    • positive (string): min: 35 tokens, mean: 124.0 tokens, max: 256 tokens
    • negative (string): min: 33 tokens, mean: 134.28 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task069_abductivenli_classification

  • Dataset: task069_abductivenli_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 33 tokens, mean: 52.09 tokens, max: 86 tokens
    • positive (string): min: 33 tokens, mean: 52.07 tokens, max: 95 tokens
    • negative (string): min: 33 tokens, mean: 51.91 tokens, max: 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task574_air_dialogue_sentence_generation

  • Dataset: task574_air_dialogue_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 54 tokens, mean: 144.27 tokens, max: 256 tokens
    • positive (string): min: 57 tokens, mean: 143.51 tokens, max: 256 tokens
    • negative (string): min: 66 tokens, mean: 147.62 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task187_snli_entailment_to_contradiction_text_modification

  • Dataset: task187_snli_entailment_to_contradiction_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 16 tokens, mean: 30.26 tokens, max: 69 tokens
    • positive (string): min: 17 tokens, mean: 30.08 tokens, max: 104 tokens
    • negative (string): min: 17 tokens, mean: 29.35 tokens, max: 71 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task749_glucose_reverse_cause_emotion_detection

  • Dataset: task749_glucose_reverse_cause_emotion_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 38 tokens, mean: 67.95 tokens, max: 106 tokens
    • positive (string): min: 37 tokens, mean: 67.23 tokens, max: 104 tokens
    • negative (string): min: 39 tokens, mean: 68.79 tokens, max: 107 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1552_scitail_question_generation

  • Dataset: task1552_scitail_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 7 tokens, mean: 18.34 tokens, max: 53 tokens
    • positive (string): min: 7 tokens, mean: 17.57 tokens, max: 46 tokens
    • negative (string): min: 7 tokens, mean: 15.86 tokens, max: 54 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task750_aqua_multiple_choice_answering

  • Dataset: task750_aqua_multiple_choice_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 33 tokens, mean: 70.17 tokens, max: 194 tokens
    • positive (string): min: 32 tokens, mean: 68.58 tokens, max: 194 tokens
    • negative (string): min: 28 tokens, mean: 68.28 tokens, max: 165 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task327_jigsaw_classification_toxic

  • Dataset: task327_jigsaw_classification_toxic
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 5 tokens, mean: 36.97 tokens, max: 234 tokens
    • positive (string): min: 5 tokens, mean: 41.55 tokens, max: 256 tokens
    • negative (string): min: 5 tokens, mean: 46.13 tokens, max: 244 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1502_hatexplain_classification

  • Dataset: task1502_hatexplain_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 5 tokens, mean: 28.81 tokens, max: 73 tokens
    • positive (string): min: 5 tokens, mean: 26.8 tokens, max: 110 tokens
    • negative (string): min: 5 tokens, mean: 27.25 tokens, max: 90 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task328_jigsaw_classification_insult

  • Dataset: task328_jigsaw_classification_insult
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 5 tokens, mean: 50.85 tokens, max: 247 tokens
    • positive (string): min: 5 tokens, mean: 60.44 tokens, max: 256 tokens
    • negative (string): min: 5 tokens, mean: 63.9 tokens, max: 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task304_numeric_fused_head_resolution

  • Dataset: task304_numeric_fused_head_resolution
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 15 tokens, mean: 121.08 tokens, max: 256 tokens
    • positive (string): min: 12 tokens, mean: 122.16 tokens, max: 256 tokens
    • negative (string): min: 11 tokens, mean: 135.09 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1293_kilt_tasks_hotpotqa_question_answering

  • Dataset: task1293_kilt_tasks_hotpotqa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 10 tokens, mean: 24.85 tokens, max: 114 tokens
    • positive (string): min: 9 tokens, mean: 24.21 tokens, max: 114 tokens
    • negative (string): min: 8 tokens, mean: 23.81 tokens, max: 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task216_rocstories_correct_answer_generation

  • Dataset: task216_rocstories_correct_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 39 tokens, mean: 59.48 tokens, max: 83 tokens
    • positive (string): min: 36 tokens, mean: 58.43 tokens, max: 92 tokens
    • negative (string): min: 39 tokens, mean: 58.2 tokens, max: 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1326_qa_zre_question_generation_from_answer

  • Dataset: task1326_qa_zre_question_generation_from_answer
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 17 tokens, mean: 46.64 tokens, max: 256 tokens
    • positive (string): min: 14 tokens, mean: 45.58 tokens, max: 256 tokens
    • negative (string): min: 18 tokens, mean: 49.45 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1338_peixian_equity_evaluation_corpus_sentiment_classifier

  • Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 6 tokens, mean: 9.69 tokens, max: 16 tokens
    • positive (string): min: 6 tokens, mean: 9.7 tokens, max: 16 tokens
    • negative (string): min: 6 tokens, mean: 9.59 tokens, max: 17 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1729_personachat_generate_next

  • Dataset: task1729_personachat_generate_next
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 44 tokens, mean: 146.83 tokens, max: 256 tokens
    • positive (string): min: 43 tokens, mean: 142.94 tokens, max: 256 tokens
    • negative (string): min: 50 tokens, mean: 144.69 tokens, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1202_atomic_classification_xneed

  • Dataset: task1202_atomic_classification_xneed
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min: 14 tokens, mean: 19.56 tokens, max: 32 tokens
    • positive (string): min: 14 tokens, mean: 19.38 tokens, max: 31 tokens
    • negative (string): min: 14 tokens, mean: 19.24 tokens, max: 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task400_paws_paraphrase_classification

  • Dataset: task400_paws_paraphrase_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 19 / mean 52.16 / max 97 tokens
    • positive (string): min 18 / mean 51.75 / max 98 tokens
    • negative (string): min 19 / mean 52.95 / max 97 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task502_scruples_anecdotes_whoiswrong_verification

  • Dataset: task502_scruples_anecdotes_whoiswrong_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 229.88 / max 256 tokens
    • positive (string): min 12 / mean 236.97 / max 256 tokens
    • negative (string): min 23 / mean 235.34 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task088_identify_typo_verification

  • Dataset: task088_identify_typo_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 11 / mean 15.1 / max 48 tokens
    • positive (string): min 10 / mean 15.06 / max 47 tokens
    • negative (string): min 10 / mean 15.41 / max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task221_rocstories_two_choice_classification

  • Dataset: task221_rocstories_two_choice_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 47 / mean 72.64 / max 108 tokens
    • positive (string): min 48 / mean 72.56 / max 109 tokens
    • negative (string): min 46 / mean 73.23 / max 108 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task200_mnli_entailment_classification

  • Dataset: task200_mnli_entailment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 72.66 / max 198 tokens
    • positive (string): min 23 / mean 72.92 / max 224 tokens
    • negative (string): min 23 / mean 73.48 / max 226 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task074_squad1.1_question_generation

  • Dataset: task074_squad1.1_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 30 / mean 149.61 / max 256 tokens
    • positive (string): min 33 / mean 160.64 / max 256 tokens
    • negative (string): min 38 / mean 164.94 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task581_socialiqa_question_generation

  • Dataset: task581_socialiqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 26.47 / max 69 tokens
    • positive (string): min 14 / mean 25.5 / max 48 tokens
    • negative (string): min 15 / mean 25.89 / max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1186_nne_hrngo_classification

  • Dataset: task1186_nne_hrngo_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 19 / mean 33.83 / max 79 tokens
    • positive (string): min 19 / mean 33.53 / max 74 tokens
    • negative (string): min 20 / mean 33.3 / max 77 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task898_freebase_qa_answer_generation

  • Dataset: task898_freebase_qa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8 / mean 19.18 / max 125 tokens
    • positive (string): min 8 / mean 17.45 / max 49 tokens
    • negative (string): min 8 / mean 17.4 / max 79 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1408_dart_similarity_classification

  • Dataset: task1408_dart_similarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 22 / mean 59.53 / max 147 tokens
    • positive (string): min 22 / mean 61.93 / max 154 tokens
    • negative (string): min 20 / mean 48.83 / max 124 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task168_strategyqa_question_decomposition

  • Dataset: task168_strategyqa_question_decomposition
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 42 / mean 80.63 / max 181 tokens
    • positive (string): min 42 / mean 78.98 / max 179 tokens
    • negative (string): min 42 / mean 77.19 / max 166 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1357_xlsum_summary_generation

  • Dataset: task1357_xlsum_summary_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 67 / mean 241.86 / max 256 tokens
    • positive (string): min 69 / mean 242.71 / max 256 tokens
    • negative (string): min 67 / mean 247.11 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task390_torque_text_span_selection

  • Dataset: task390_torque_text_span_selection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 47 / mean 110.01 / max 196 tokens
    • positive (string): min 42 / mean 110.44 / max 195 tokens
    • negative (string): min 48 / mean 110.66 / max 196 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task165_mcscript_question_answering_commonsense

  • Dataset: task165_mcscript_question_answering_commonsense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 147 / mean 197.75 / max 256 tokens
    • positive (string): min 145 / mean 196.42 / max 256 tokens
    • negative (string): min 147 / mean 198.04 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1533_daily_dialog_formal_classification

  • Dataset: task1533_daily_dialog_formal_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13 / mean 130.14 / max 256 tokens
    • positive (string): min 15 / mean 136.79 / max 256 tokens
    • negative (string): min 17 / mean 136.81 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task002_quoref_answer_generation

  • Dataset: task002_quoref_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 214 / mean 255.53 / max 256 tokens
    • positive (string): min 214 / mean 255.54 / max 256 tokens
    • negative (string): min 224 / mean 255.61 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1297_qasc_question_answering

  • Dataset: task1297_qasc_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 61 / mean 84.74 / max 134 tokens
    • positive (string): min 59 / mean 85.41 / max 130 tokens
    • negative (string): min 58 / mean 84.83 / max 125 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task305_jeopardy_answer_generation_normal

  • Dataset: task305_jeopardy_answer_generation_normal
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9 / mean 27.67 / max 59 tokens
    • positive (string): min 9 / mean 27.39 / max 45 tokens
    • negative (string): min 11 / mean 27.41 / max 46 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task029_winogrande_full_object

  • Dataset: task029_winogrande_full_object
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 7.37 / max 12 tokens
    • positive (string): min 7 / mean 7.33 / max 11 tokens
    • negative (string): min 7 / mean 7.24 / max 10 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1327_qa_zre_answer_generation_from_question

  • Dataset: task1327_qa_zre_answer_generation_from_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 54.91 / max 256 tokens
    • positive (string): min 23 / mean 52.08 / max 256 tokens
    • negative (string): min 27 / mean 55.5 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task326_jigsaw_classification_obscene

  • Dataset: task326_jigsaw_classification_obscene
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 65.2 / max 256 tokens
    • positive (string): min 5 / mean 77.26 / max 256 tokens
    • negative (string): min 5 / mean 73.17 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1542_every_ith_element_from_starting

  • Dataset: task1542_every_ith_element_from_starting
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13 / mean 127.39 / max 245 tokens
    • positive (string): min 13 / mean 125.92 / max 244 tokens
    • negative (string): min 13 / mean 123.75 / max 238 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task570_recipe_nlg_ner_generation

  • Dataset: task570_recipe_nlg_ner_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9 / mean 73.94 / max 250 tokens
    • positive (string): min 5 / mean 73.35 / max 256 tokens
    • negative (string): min 8 / mean 75.51 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1409_dart_text_generation

  • Dataset: task1409_dart_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 18 / mean 68.05 / max 174 tokens
    • positive (string): min 18 / mean 72.93 / max 170 tokens
    • negative (string): min 17 / mean 68.0 / max 164 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task401_numeric_fused_head_reference

  • Dataset: task401_numeric_fused_head_reference
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16 / mean 109.26 / max 256 tokens
    • positive (string): min 16 / mean 117.92 / max 256 tokens
    • negative (string): min 18 / mean 119.84 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task846_pubmedqa_classification

  • Dataset: task846_pubmedqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 32 / mean 85.64 / max 246 tokens
    • positive (string): min 33 / mean 85.03 / max 225 tokens
    • negative (string): min 28 / mean 93.96 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1712_poki_classification

  • Dataset: task1712_poki_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 52.23 / max 256 tokens
    • positive (string): min 7 / mean 55.08 / max 256 tokens
    • negative (string): min 7 / mean 63.09 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task344_hybridqa_answer_generation

  • Dataset: task344_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9 / mean 22.26 / max 50 tokens
    • positive (string): min 8 / mean 22.14 / max 58 tokens
    • negative (string): min 7 / mean 22.01 / max 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task875_emotion_classification

  • Dataset: task875_emotion_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 23.04 / max 75 tokens
    • positive (string): min 4 / mean 18.43 / max 63 tokens
    • negative (string): min 5 / mean 20.33 / max 68 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1214_atomic_classification_xwant

  • Dataset: task1214_atomic_classification_xwant
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 19.65 / max 32 tokens
    • positive (string): min 14 / mean 19.44 / max 29 tokens
    • negative (string): min 14 / mean 19.51 / max 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task106_scruples_ethical_judgment

  • Dataset: task106_scruples_ethical_judgment
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 30.0 / max 70 tokens
    • positive (string): min 14 / mean 28.93 / max 86 tokens
    • negative (string): min 14 / mean 28.69 / max 58 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task238_iirc_answer_from_passage_answer_generation

  • Dataset: task238_iirc_answer_from_passage_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 138 / mean 242.84 / max 256 tokens
    • positive (string): min 165 / mean 242.64 / max 256 tokens
    • negative (string): min 173 / mean 243.38 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1391_winogrande_easy_answer_generation

  • Dataset: task1391_winogrande_easy_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26 / mean 31.7 / max 54 tokens
    • positive (string): min 26 / mean 31.3 / max 48 tokens
    • negative (string): min 25 / mean 31.2 / max 49 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task195_sentiment140_classification

  • Dataset: task195_sentiment140_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 22.51 / max 118 tokens
    • positive (string): min 4 / mean 18.98 / max 79 tokens
    • negative (string): min 5 / mean 21.42 / max 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task163_count_words_ending_with_letter

  • Dataset: task163_count_words_ending_with_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 28 / mean 31.97 / max 54 tokens
    • positive (string): min 28 / mean 31.7 / max 57 tokens
    • negative (string): min 28 / mean 31.57 / max 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task579_socialiqa_classification

  • Dataset: task579_socialiqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 39 / mean 54.15 / max 132 tokens
    • positive (string): min 36 / mean 53.63 / max 103 tokens
    • negative (string): min 40 / mean 54.12 / max 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task569_recipe_nlg_text_generation

  • Dataset: task569_recipe_nlg_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 25 / mean 192.7 / max 256 tokens
    • positive (string): min 55 / mean 194.02 / max 256 tokens
    • negative (string): min 37 / mean 198.01 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1602_webquestion_question_genreation

  • Dataset: task1602_webquestion_question_genreation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 23.59 / max 112 tokens
    • positive (string): min 12 / mean 24.18 / max 112 tokens
    • negative (string): min 12 / mean 22.52 / max 120 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task747_glucose_cause_emotion_detection

  • Dataset: task747_glucose_cause_emotion_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35 / mean 67.95 / max 112 tokens
    • positive (string): min 36 / mean 68.16 / max 108 tokens
    • negative (string): min 36 / mean 68.84 / max 99 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task219_rocstories_title_answer_generation

  • Dataset: task219_rocstories_title_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 42 / mean 67.65 / max 97 tokens
    • positive (string): min 45 / mean 66.72 / max 97 tokens
    • negative (string): min 41 / mean 66.88 / max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task178_quartz_question_answering

  • Dataset: task178_quartz_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 28 / mean 57.99 / max 110 tokens
    • positive (string): min 28 / mean 57.21 / max 111 tokens
    • negative (string): min 28 / mean 56.85 / max 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task103_facts2story_long_text_generation

  • Dataset: task103_facts2story_long_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 52 / mean 80.5 / max 143 tokens
    • positive (string): min 51 / mean 82.19 / max 157 tokens
    • negative (string): min 49 / mean 78.93 / max 145 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task301_record_question_generation

  • Dataset: task301_record_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 140 / mean 210.92 / max 256 tokens
    • positive (string): min 139 / mean 209.8 / max 256 tokens
    • negative (string): min 143 / mean 208.87 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1369_healthfact_sentence_generation

  • Dataset: task1369_healthfact_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 110 / mean 243.09 / max 256 tokens
    • positive (string): min 101 / mean 243.16 / max 256 tokens
    • negative (string): min 113 / mean 251.69 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task515_senteval_odd_word_out

  • Dataset: task515_senteval_odd_word_out
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 19.82 / max 36 tokens
    • positive (string): min 7 / mean 19.22 / max 38 tokens
    • negative (string): min 7 / mean 19.02 / max 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task496_semeval_answer_generation

  • Dataset: task496_semeval_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4 / mean 28.16 / max 46 tokens
    • positive (string): min 18 / mean 27.78 / max 45 tokens
    • negative (string): min 19 / mean 27.71 / max 45 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1658_billsum_summarization

  • Dataset: task1658_billsum_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 256 / mean 256.0 / max 256 tokens
    • positive (string): min 256 / mean 256.0 / max 256 tokens
    • negative (string): min 256 / mean 256.0 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
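
This is one of the tasks (task230_iirc_passage_classification below is another) where min, mean, and max all read exactly 256 tokens: every anchor, positive, and negative overflows the 256-token window, so the table is measuring the truncation cap rather than the raw document lengths. A hedged check for how often a column saturates the window (the helper name is ours):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    def fraction_truncated(texts):
        # Share of samples whose untruncated token count reaches the cap.
        cap = model.max_seq_length  # 256 for this base model
        lengths = [len(ids) for ids in model.tokenizer(list(texts))["input_ids"]]
        return sum(n >= cap for n in lengths) / len(lengths)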
    

task1204_atomic_classification_hinderedby

  • Dataset: task1204_atomic_classification_hinderedby
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 22.08 / max 35 tokens
    • positive (string): min 14 / mean 22.05 / max 34 tokens
    • negative (string): min 14 / mean 21.51 / max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1392_superglue_multirc_answer_verification

  • Dataset: task1392_superglue_multirc_answer_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 128 / mean 241.67 / max 256 tokens
    • positive (string): min 127 / mean 241.96 / max 256 tokens
    • negative (string): min 136 / mean 242.0 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task306_jeopardy_answer_generation_double

  • Dataset: task306_jeopardy_answer_generation_double
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10 / mean 27.86 / max 47 tokens
    • positive (string): min 10 / mean 27.16 / max 46 tokens
    • negative (string): min 11 / mean 27.47 / max 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1286_openbookqa_question_answering

  • Dataset: task1286_openbookqa_question_answering
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 22 / mean 39.61 / max 85 tokens
    • positive (string): min 23 / mean 38.96 / max 96 tokens
    • negative (string): min 22 / mean 38.35 / max 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task159_check_frequency_of_words_in_sentence_pair

  • Dataset: task159_check_frequency_of_words_in_sentence_pair
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 44 / mean 50.41 / max 67 tokens
    • positive (string): min 44 / mean 50.35 / max 67 tokens
    • negative (string): min 44 / mean 50.59 / max 66 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task151_tomqa_find_location_easy_clean

  • Dataset: task151_tomqa_find_location_easy_clean
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 37 / mean 50.74 / max 79 tokens
    • positive (string): min 37 / mean 50.23 / max 74 tokens
    • negative (string): min 37 / mean 50.66 / max 74 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task323_jigsaw_classification_sexually_explicit

  • Dataset: task323_jigsaw_classification_sexually_explicit
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 66.2 / max 248 tokens
    • positive (string): min 5 / mean 76.82 / max 248 tokens
    • negative (string): min 6 / mean 75.6 / max 251 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task037_qasc_generate_related_fact

  • Dataset: task037_qasc_generate_related_fact
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13 / mean 22.08 / max 50 tokens
    • positive (string): min 13 / mean 22.07 / max 42 tokens
    • negative (string): min 13 / mean 21.88 / max 40 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task027_drop_answer_type_generation

  • Dataset: task027_drop_answer_type_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 87 / mean 229.31 / max 256 tokens
    • positive (string): min 74 / mean 230.61 / max 256 tokens
    • negative (string): min 71 / mean 232.72 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1596_event2mind_text_generation_2

  • Dataset: task1596_event2mind_text_generation_2
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 10.0 / max 18 tokens
    • positive (string): min 6 / mean 10.04 / max 19 tokens
    • negative (string): min 6 / mean 10.04 / max 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task141_odd-man-out_classification_category

  • Dataset: task141_odd-man-out_classification_category
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16 / mean 18.43 / max 28 tokens
    • positive (string): min 16 / mean 18.37 / max 26 tokens
    • negative (string): min 16 / mean 18.45 / max 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task194_duorc_answer_generation

  • Dataset: task194_duorc_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 149 / mean 251.8 / max 256 tokens
    • positive (string): min 147 / mean 252.1 / max 256 tokens
    • negative (string): min 148 / mean 251.81 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task679_hope_edi_english_text_classification

  • Dataset: task679_hope_edi_english_text_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 27.62 / max 199 tokens
    • positive (string): min 4 / mean 27.01 / max 205 tokens
    • negative (string): min 5 / mean 29.68 / max 194 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task246_dream_question_generation

  • Dataset: task246_dream_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17 / mean 80.01 / max 256 tokens
    • positive (string): min 14 / mean 80.34 / max 256 tokens
    • negative (string): min 15 / mean 86.98 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1195_disflqa_disfluent_to_fluent_conversion

  • Dataset: task1195_disflqa_disfluent_to_fluent_conversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 9 / mean 19.79 / max 41 tokens
    • positive (string): min 9 / mean 19.84 / max 40 tokens
    • negative (string): min 2 / mean 20.05 / max 44 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task065_timetravel_consistent_sentence_classification

  • Dataset: task065_timetravel_consistent_sentence_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 55 / mean 79.44 / max 117 tokens
    • positive (string): min 51 / mean 79.28 / max 110 tokens
    • negative (string): min 53 / mean 80.05 / max 110 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task351_winomt_classification_gender_identifiability_anti

  • Dataset: task351_winomt_classification_gender_identifiability_anti
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 16 / mean 21.8 / max 30 tokens
    • positive (string): min 16 / mean 21.7 / max 31 tokens
    • negative (string): min 16 / mean 21.83 / max 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task580_socialiqa_answer_generation

  • Dataset: task580_socialiqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 35 / mean 52.36 / max 107 tokens
    • positive (string): min 35 / mean 51.03 / max 86 tokens
    • negative (string): min 35 / mean 51.01 / max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task583_udeps_eng_coarse_pos_tagging

  • Dataset: task583_udeps_eng_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 40.75 / max 185 tokens
    • positive (string): min 12 / mean 39.87 / max 185 tokens
    • negative (string): min 12 / mean 40.43 / max 185 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task202_mnli_contradiction_classification

  • Dataset: task202_mnli_contradiction_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 73.61 / max 190 tokens
    • positive (string): min 28 / mean 76.12 / max 256 tokens
    • negative (string): min 23 / mean 74.47 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task222_rocstories_two_chioce_slotting_classification

  • Dataset: task222_rocstories_two_chioce_slotting_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 48 / mean 73.08 / max 105 tokens
    • positive (string): min 48 / mean 73.29 / max 100 tokens
    • negative (string): min 49 / mean 71.96 / max 102 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task498_scruples_anecdotes_whoiswrong_classification

  • Dataset: task498_scruples_anecdotes_whoiswrong_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 24 / mean 225.81 / max 256 tokens
    • positive (string): min 47 / mean 231.81 / max 256 tokens
    • negative (string): min 47 / mean 231.0 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task067_abductivenli_answer_generation

  • Dataset: task067_abductivenli_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 26.76 / max 40 tokens
    • positive (string): min 14 / mean 26.09 / max 42 tokens
    • negative (string): min 15 / mean 26.35 / max 38 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task616_cola_classification

  • Dataset: task616_cola_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 12.44 / max 33 tokens
    • positive (string): min 5 / mean 12.29 / max 33 tokens
    • negative (string): min 6 / mean 12.16 / max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task286_olid_offense_judgment

  • Dataset: task286_olid_offense_judgment
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5 / mean 32.73 / max 145 tokens
    • positive (string): min 5 / mean 30.79 / max 171 tokens
    • negative (string): min 5 / mean 30.27 / max 169 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task188_snli_neutral_to_entailment_text_modification

  • Dataset: task188_snli_neutral_to_entailment_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 18 / mean 31.76 / max 79 tokens
    • positive (string): min 18 / mean 31.25 / max 84 tokens
    • negative (string): min 18 / mean 33.02 / max 84 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task223_quartz_explanation_generation

  • Dataset: task223_quartz_explanation_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12 / mean 31.41 / max 68 tokens
    • positive (string): min 13 / mean 31.77 / max 68 tokens
    • negative (string): min 13 / mean 28.98 / max 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task820_protoqa_answer_generation

  • Dataset: task820_protoqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6 / mean 14.71 / max 29 tokens
    • positive (string): min 7 / mean 14.49 / max 27 tokens
    • negative (string): min 6 / mean 14.15 / max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task196_sentiment140_answer_generation

  • Dataset: task196_sentiment140_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 17 / mean 36.21 / max 72 tokens
    • positive (string): min 17 / mean 32.8 / max 61 tokens
    • negative (string): min 17 / mean 36.21 / max 72 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1678_mathqa_answer_selection

  • Dataset: task1678_mathqa_answer_selection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 33 / mean 70.5 / max 177 tokens
    • positive (string): min 30 / mean 69.11 / max 146 tokens
    • negative (string): min 33 / mean 69.75 / max 160 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task349_squad2.0_answerable_unanswerable_question_classification

  • Dataset: task349_squad2.0_answerable_unanswerable_question_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 53 / mean 175.5 / max 256 tokens
    • positive (string): min 57 / mean 175.71 / max 256 tokens
    • negative (string): min 53 / mean 175.37 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task154_tomqa_find_location_hard_noise

  • Dataset: task154_tomqa_find_location_hard_noise
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 129 / mean 176.0 / max 253 tokens
    • positive (string): min 126 / mean 176.09 / max 249 tokens
    • negative (string): min 128 / mean 177.44 / max 254 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task333_hateeval_classification_hate_en

  • Dataset: task333_hateeval_classification_hate_en
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8 / mean 38.53 / max 117 tokens
    • positive (string): min 7 / mean 37.38 / max 109 tokens
    • negative (string): min 7 / mean 36.64 / max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task235_iirc_question_from_subtext_answer_generation

  • Dataset: task235_iirc_question_from_subtext_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14 / mean 52.74 / max 256 tokens
    • positive (string): min 12 / mean 50.73 / max 256 tokens
    • negative (string): min 12 / mean 55.69 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1554_scitail_classification

  • Dataset: task1554_scitail_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7 / mean 16.69 / max 38 tokens
    • positive (string): min 7 / mean 25.79 / max 68 tokens
    • negative (string): min 8 / mean 24.42 / max 59 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task210_logic2text_structured_text_generation

  • Dataset: task210_logic2text_structured_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 13 / mean 31.62 / max 101 tokens
    • positive (string): min 13 / mean 30.74 / max 94 tokens
    • negative (string): min 12 / mean 32.72 / max 89 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task035_winogrande_question_modification_person

  • Dataset: task035_winogrande_question_modification_person
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 31 / mean 36.19 / max 50 tokens
    • positive (string): min 31 / mean 35.74 / max 55 tokens
    • negative (string): min 31 / mean 35.48 / max 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task230_iirc_passage_classification

  • Dataset: task230_iirc_passage_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 256 / mean 256.0 / max 256 tokens
    • positive (string): min 256 / mean 256.0 / max 256 tokens
    • negative (string): min 256 / mean 256.0 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1356_xlsum_title_generation

  • Dataset: task1356_xlsum_title_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 59 / mean 240.0 / max 256 tokens
    • positive (string): min 58 / mean 241.02 / max 256 tokens
    • negative (string): min 64 / mean 248.67 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1726_mathqa_correct_answer_generation

  • Dataset: task1726_mathqa_correct_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 10 / mean 44.19 / max 156 tokens
    • positive (string): min 12 / mean 42.51 / max 129 tokens
    • negative (string): min 11 / mean 43.3 / max 133 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task302_record_classification

  • Dataset: task302_record_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 194 / mean 253.34 / max 256 tokens
    • positive (string): min 198 / mean 252.96 / max 256 tokens
    • negative (string): min 195 / mean 252.92 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task380_boolq_yes_no_question

  • Dataset: task380_boolq_yes_no_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26 / mean 133.82 / max 256 tokens
    • positive (string): min 26 / mean 138.28 / max 256 tokens
    • negative (string): min 27 / mean 137.7 / max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters (a multi-subset training sketch follows this entry):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
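
Because every subset in this section shares the same triplet schema and the same loss, they can be trained jointly. A minimal sketch, assuming the multi-dataset training path of sentence-transformers v3 (the toy rows are invented; in the real run each subset holds 1,018 triplets):

    from datasets import Dataset
    from sentence_transformers import (
        SentenceTransformer,
        SentenceTransformerTrainer,
        losses,
        util,
    )

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    def toy_subset() -> Dataset:
        # Invented stand-in for one (anchor, positive, negative) subset.
        return Dataset.from_dict({
            "anchor": ["a query"],
            "positive": ["a relevant passage"],
            "negative": ["an unrelated passage"],
        })

    # One entry per subset; the names are two of the subsets listed here.
    train_datasets = {
        "task302_record_classification": toy_subset(),
        "task380_boolq_yes_no_question": toy_subset(),
    }

    # A single loss instance is applied to every subset.
    loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

    trainer = SentenceTransformerTrainer(model=model, train_dataset=train_datasets, loss=loss)
    # trainer.train()  # omitted: the toy data above is illustrative only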
    

task212_logic2text_classification

  • Dataset: task212_logic2text_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 33.08, max: 146 tokens
    • positive: string; min: 14, mean: 32.04, max: 146 tokens
    • negative: string; min: 14, mean: 33.02, max: 127 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task748_glucose_reverse_cause_event_detection

  • Dataset: task748_glucose_reverse_cause_event_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 35, mean: 67.7, max: 105 tokens
    • positive: string; min: 38, mean: 67.03, max: 106 tokens
    • negative: string; min: 39, mean: 68.84, max: 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task834_mathdataset_classification

  • Dataset: task834_mathdataset_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 27.58, max: 83 tokens
    • positive: string; min: 6, mean: 27.78, max: 83 tokens
    • negative: string; min: 5, mean: 26.82, max: 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task350_winomt_classification_gender_identifiability_pro

  • Dataset: task350_winomt_classification_gender_identifiability_pro
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 16, mean: 21.79, max: 30 tokens
    • positive: string; min: 16, mean: 21.63, max: 30 tokens
    • negative: string; min: 16, mean: 21.79, max: 30 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task191_hotpotqa_question_generation

  • Dataset: task191_hotpotqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 198, mean: 255.88, max: 256 tokens
    • positive: string; min: 238, mean: 255.93, max: 256 tokens
    • negative: string; min: 255, mean: 256.0, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task236_iirc_question_from_passage_answer_generation

  • Dataset: task236_iirc_question_from_passage_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 135, mean: 238.2, max: 256 tokens
    • positive: string; min: 155, mean: 237.46, max: 256 tokens
    • negative: string; min: 154, mean: 239.59, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task217_rocstories_ordering_answer_generation

  • Dataset: task217_rocstories_ordering_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 45, mean: 72.45, max: 107 tokens
    • positive: string; min: 48, mean: 72.26, max: 107 tokens
    • negative: string; min: 48, mean: 71.03, max: 105 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task568_circa_question_generation

  • Dataset: task568_circa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 4, mean: 9.57, max: 25 tokens
    • positive: string; min: 4, mean: 9.53, max: 20 tokens
    • negative: string; min: 4, mean: 8.93, max: 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task614_glucose_cause_event_detection

  • Dataset: task614_glucose_cause_event_detection
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 39, mean: 67.7, max: 102 tokens
    • positive: string; min: 39, mean: 67.16, max: 106 tokens
    • negative: string; min: 38, mean: 68.55, max: 103 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task361_spolin_yesand_prompt_response_classification

  • Dataset: task361_spolin_yesand_prompt_response_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 47.04, max: 137 tokens
    • positive: string; min: 17, mean: 45.97, max: 119 tokens
    • negative: string; min: 17, mean: 47.1, max: 128 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task421_persent_sentence_sentiment_classification

  • Dataset: task421_persent_sentence_sentiment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 22, mean: 67.68, max: 256 tokens
    • positive: string; min: 22, mean: 71.41, max: 256 tokens
    • negative: string; min: 19, mean: 72.33, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task203_mnli_sentence_generation

  • Dataset: task203_mnli_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 39.1, max: 175 tokens
    • positive: string; min: 14, mean: 35.55, max: 175 tokens
    • negative: string; min: 13, mean: 34.25, max: 170 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task420_persent_document_sentiment_classification

  • Dataset: task420_persent_document_sentiment_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 22, mean: 221.62, max: 256 tokens
    • positive: string; min: 22, mean: 233.37, max: 256 tokens
    • negative: string; min: 22, mean: 227.57, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task153_tomqa_find_location_hard_clean

  • Dataset: task153_tomqa_find_location_hard_clean
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 39, mean: 161.41, max: 256 tokens
    • positive: string; min: 39, mean: 160.84, max: 256 tokens
    • negative: string; min: 39, mean: 164.12, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task346_hybridqa_classification

  • Dataset: task346_hybridqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 32.88, max: 68 tokens
    • positive: string; min: 18, mean: 31.94, max: 63 tokens
    • negative: string; min: 19, mean: 31.91, max: 75 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1211_atomic_classification_hassubevent

  • Dataset: task1211_atomic_classification_hassubevent
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 11, mean: 16.28, max: 31 tokens
    • positive: string; min: 11, mean: 16.08, max: 29 tokens
    • negative: string; min: 11, mean: 16.83, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task360_spolin_yesand_response_generation

  • Dataset: task360_spolin_yesand_response_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 22.53, max: 89 tokens
    • positive: string; min: 6, mean: 21.05, max: 92 tokens
    • negative: string; min: 7, mean: 20.8, max: 67 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task510_reddit_tifu_title_summarization

  • Dataset: task510_reddit_tifu_title_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 217.71, max: 256 tokens
    • positive: string; min: 20, mean: 218.18, max: 256 tokens
    • negative: string; min: 10, mean: 222.62, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task511_reddit_tifu_long_text_summarization

  • Dataset: task511_reddit_tifu_long_text_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 29, mean: 239.27, max: 256 tokens
    • positive: string; min: 76, mean: 238.8, max: 256 tokens
    • negative: string; min: 43, mean: 245.19, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task345_hybridqa_answer_generation

  • Dataset: task345_hybridqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 22.16, max: 50 tokens
    • positive: string; min: 10, mean: 21.62, max: 70 tokens
    • negative: string; min: 8, mean: 20.91, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task270_csrg_counterfactual_context_generation

  • Dataset: task270_csrg_counterfactual_context_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 63, mean: 100.09, max: 158 tokens
    • positive: string; min: 63, mean: 98.76, max: 142 tokens
    • negative: string; min: 62, mean: 100.29, max: 141 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task307_jeopardy_answer_generation_final

  • Dataset: task307_jeopardy_answer_generation_final
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 15, mean: 29.55, max: 46 tokens
    • positive: string; min: 15, mean: 29.3, max: 53 tokens
    • negative: string; min: 15, mean: 29.25, max: 43 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task001_quoref_question_generation

  • Dataset: task001_quoref_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 201, mean: 254.96, max: 256 tokens
    • positive: string; min: 99, mean: 254.24, max: 256 tokens
    • negative: string; min: 173, mean: 255.09, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task089_swap_words_verification

  • Dataset: task089_swap_words_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 12.86, max: 28 tokens
    • positive: string; min: 9, mean: 12.63, max: 24 tokens
    • negative: string; min: 9, mean: 12.25, max: 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1196_atomic_classification_oeffect

  • Dataset: task1196_atomic_classification_oeffect
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 18.78, max: 41 tokens
    • positive: string; min: 14, mean: 18.57, max: 30 tokens
    • negative: string; min: 14, mean: 18.51, max: 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task080_piqa_answer_generation

  • Dataset: task080_piqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3, mean: 10.85, max: 33 tokens
    • positive: string; min: 3, mean: 10.75, max: 24 tokens
    • negative: string; min: 3, mean: 10.12, max: 26 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1598_nyc_long_text_generation

  • Dataset: task1598_nyc_long_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 17, mean: 35.49, max: 56 tokens
    • positive: string; min: 17, mean: 35.61, max: 56 tokens
    • negative: string; min: 20, mean: 36.63, max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task240_tweetqa_question_generation

  • Dataset: task240_tweetqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 27, mean: 51.08, max: 94 tokens
    • positive: string; min: 25, mean: 50.61, max: 92 tokens
    • negative: string; min: 20, mean: 51.58, max: 95 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task615_moviesqa_answer_generation

  • Dataset: task615_moviesqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 11.45, max: 23 tokens
    • positive: string; min: 7, mean: 11.43, max: 19 tokens
    • negative: string; min: 5, mean: 11.37, max: 21 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1347_glue_sts-b_similarity_classification

  • Dataset: task1347_glue_sts-b_similarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 17, mean: 31.15, max: 88 tokens
    • positive: string; min: 16, mean: 31.1, max: 92 tokens
    • negative: string; min: 16, mean: 30.97, max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task114_is_the_given_word_longest

  • Dataset: task114_is_the_given_word_longest
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 25, mean: 28.84, max: 68 tokens
    • positive: string; min: 25, mean: 28.47, max: 48 tokens
    • negative: string; min: 25, mean: 28.72, max: 47 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task292_storycommonsense_character_text_generation

  • Dataset: task292_storycommonsense_character_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 43, mean: 67.9, max: 98 tokens
    • positive: string; min: 46, mean: 67.11, max: 104 tokens
    • negative: string; min: 43, mean: 69.09, max: 96 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task115_help_advice_classification

  • Dataset: task115_help_advice_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 2, mean: 19.92, max: 91 tokens
    • positive: string; min: 3, mean: 18.28, max: 92 tokens
    • negative: string; min: 4, mean: 19.23, max: 137 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task431_senteval_object_count

  • Dataset: task431_senteval_object_count
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 16.77, max: 37 tokens
    • positive: string; min: 7, mean: 15.16, max: 36 tokens
    • negative: string; min: 7, mean: 15.77, max: 35 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1360_numer_sense_multiple_choice_qa_generation

  • Dataset: task1360_numer_sense_multiple_choice_qa_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 32, mean: 40.71, max: 54 tokens
    • positive: string; min: 32, mean: 40.36, max: 53 tokens
    • negative: string; min: 32, mean: 40.32, max: 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task177_para-nmt_paraphrasing

  • Dataset: task177_para-nmt_paraphrasing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 8, mean: 19.93, max: 82 tokens
    • positive: string; min: 9, mean: 18.97, max: 58 tokens
    • negative: string; min: 9, mean: 18.26, max: 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task132_dais_text_modification

  • Dataset: task132_dais_text_modification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 9.33, max: 15 tokens
    • positive: string; min: 6, mean: 9.07, max: 15 tokens
    • negative: string; min: 6, mean: 10.15, max: 15 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task269_csrg_counterfactual_story_generation

  • Dataset: task269_csrg_counterfactual_story_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 49, mean: 80.0, max: 111 tokens
    • positive: string; min: 53, mean: 79.62, max: 116 tokens
    • negative: string; min: 48, mean: 79.46, max: 114 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task233_iirc_link_exists_classification

  • Dataset: task233_iirc_link_exists_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 145, mean: 235.46, max: 256 tokens
    • positive: string; min: 142, mean: 233.26, max: 256 tokens
    • negative: string; min: 151, mean: 234.97, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task161_count_words_containing_letter

  • Dataset: task161_count_words_containing_letter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 27, mean: 30.99, max: 53 tokens
    • positive: string; min: 27, mean: 30.79, max: 61 tokens
    • negative: string; min: 27, mean: 30.48, max: 42 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1205_atomic_classification_isafter

  • Dataset: task1205_atomic_classification_isafter
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 20.92, max: 37 tokens
    • positive: string; min: 14, mean: 20.64, max: 35 tokens
    • negative: string; min: 14, mean: 21.52, max: 37 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task571_recipe_nlg_ner_generation

  • Dataset: task571_recipe_nlg_ner_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 5, mean: 118.42, max: 256 tokens
    • positive: string; min: 7, mean: 118.89, max: 256 tokens
    • negative: string; min: 6, mean: 111.25, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1292_yelp_review_full_text_categorization

  • Dataset: task1292_yelp_review_full_text_categorization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 4, mean: 136.77, max: 256 tokens
    • positive: string; min: 7, mean: 147.0, max: 256 tokens
    • negative: string; min: 3, mean: 146.33, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task428_senteval_inversion

  • Dataset: task428_senteval_inversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 16.68, max: 32 tokens
    • positive: string; min: 7, mean: 14.59, max: 31 tokens
    • negative: string; min: 7, mean: 15.26, max: 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task311_race_question_generation

  • Dataset: task311_race_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 115, mean: 254.61, max: 256 tokens
    • positive: string; min: 137, mean: 254.41, max: 256 tokens
    • negative: string; min: 171, mean: 255.51, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task429_senteval_tense

  • Dataset: task429_senteval_tense
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 15.82, max: 37 tokens
    • positive: string; min: 6, mean: 14.07, max: 33 tokens
    • negative: string; min: 7, mean: 15.3, max: 36 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task403_creak_commonsense_inference

  • Dataset: task403_creak_commonsense_inference
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 13, mean: 30.14, max: 104 tokens
    • positive: string; min: 13, mean: 29.54, max: 108 tokens
    • negative: string; min: 13, mean: 29.26, max: 122 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task929_products_reviews_classification

  • Dataset: task929_products_reviews_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 5, mean: 69.61, max: 126 tokens
    • positive: string; min: 6, mean: 70.61, max: 123 tokens
    • negative: string; min: 6, mean: 70.68, max: 123 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task582_naturalquestion_answer_generation

  • Dataset: task582_naturalquestion_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 10, mean: 11.7, max: 25 tokens
    • positive: string; min: 10, mean: 11.63, max: 24 tokens
    • negative: string; min: 10, mean: 11.71, max: 25 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task237_iirc_answer_from_subtext_answer_generation

  • Dataset: task237_iirc_answer_from_subtext_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 22, mean: 66.3, max: 256 tokens
    • positive: string; min: 25, mean: 64.95, max: 256 tokens
    • negative: string; min: 23, mean: 61.31, max: 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task050_multirc_answerability

  • Dataset: task050_multirc_answerability
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 15, mean: 32.56, max: 112 tokens
    • positive: string; min: 14, mean: 31.62, max: 93 tokens
    • negative: string; min: 15, mean: 32.26, max: 159 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task184_break_generate_question

  • Dataset: task184_break_generate_question
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 13, mean: 39.72, max: 147 tokens
    • positive: string; min: 13, mean: 39.07, max: 149 tokens
    • negative: string; min: 13, mean: 39.81, max: 148 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task669_ambigqa_answer_generation

  • Dataset: task669_ambigqa_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 10, mean: 12.91, max: 23 tokens
    • positive: string; min: 10, mean: 12.84, max: 27 tokens
    • negative: string; min: 11, mean: 12.74, max: 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task169_strategyqa_sentence_generation

  • Dataset: task169_strategyqa_sentence_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 19, mean: 35.06, max: 65 tokens
    • positive: string; min: 22, mean: 34.24, max: 60 tokens
    • negative: string; min: 19, mean: 33.37, max: 65 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task500_scruples_anecdotes_title_generation

  • Dataset: task500_scruples_anecdotes_title_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 225.48, max: 256 tokens
    • positive: string; min: 31, mean: 233.04, max: 256 tokens
    • negative: string; min: 27, mean: 235.04, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task241_tweetqa_classification

  • Dataset: task241_tweetqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 31, mean: 61.77, max: 92 tokens
    • positive: string; min: 36, mean: 62.17, max: 106 tokens
    • negative: string; min: 31, mean: 61.71, max: 92 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1345_glue_qqp_question_paraprashing

  • Dataset: task1345_glue_qqp_question_paraprashing
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 16.8, max: 60 tokens
    • positive: string; min: 6, mean: 15.75, max: 69 tokens
    • negative: string; min: 6, mean: 16.69, max: 51 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task218_rocstories_swap_order_answer_generation

  • Dataset: task218_rocstories_swap_order_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 48, mean: 72.69, max: 118 tokens
    • positive: string; min: 48, mean: 72.72, max: 102 tokens
    • negative: string; min: 47, mean: 72.12, max: 106 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task613_politifact_text_generation

  • Dataset: task613_politifact_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 4, mean: 24.85, max: 75 tokens
    • positive: string; min: 7, mean: 23.4, max: 56 tokens
    • negative: string; min: 5, mean: 22.9, max: 61 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1167_penn_treebank_coarse_pos_tagging

  • Dataset: task1167_penn_treebank_coarse_pos_tagging
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 16, mean: 53.87, max: 200 tokens
    • positive: string; min: 16, mean: 53.76, max: 220 tokens
    • negative: string; min: 16, mean: 55.02, max: 202 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1422_mathqa_physics

  • Dataset: task1422_mathqa_physics
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 34, mean: 72.76, max: 164 tokens
    • positive: string; min: 38, mean: 71.89, max: 157 tokens
    • negative: string; min: 39, mean: 72.78, max: 155 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task247_dream_answer_generation

  • Dataset: task247_dream_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 38, mean: 160.09, max: 256 tokens
    • positive: string; min: 39, mean: 158.97, max: 256 tokens
    • negative: string; min: 41, mean: 167.84, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task199_mnli_classification

  • Dataset: task199_mnli_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 13, mean: 43.48, max: 127 tokens
    • positive: string; min: 11, mean: 44.59, max: 149 tokens
    • negative: string; min: 11, mean: 44.16, max: 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task164_mcscript_question_answering_text

  • Dataset: task164_mcscript_question_answering_text
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 150, mean: 201.24, max: 256 tokens
    • positive: string; min: 150, mean: 201.08, max: 256 tokens
    • negative: string; min: 142, mean: 201.39, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1541_agnews_classification

  • Dataset: task1541_agnews_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 21, mean: 53.49, max: 256 tokens
    • positive: string; min: 18, mean: 52.72, max: 256 tokens
    • negative: string; min: 18, mean: 54.13, max: 161 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task516_senteval_conjoints_inversion

  • Dataset: task516_senteval_conjoints_inversion
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 8, mean: 20.15, max: 34 tokens
    • positive: string; min: 8, mean: 18.98, max: 34 tokens
    • negative: string; min: 8, mean: 18.92, max: 34 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task294_storycommonsense_motiv_text_generation

  • Dataset: task294_storycommonsense_motiv_text_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 40.72, max: 86 tokens
    • positive: string; min: 14, mean: 41.23, max: 86 tokens
    • negative: string; min: 14, mean: 40.31, max: 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task501_scruples_anecdotes_post_type_verification

  • Dataset: task501_scruples_anecdotes_post_type_verification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 230.72, max: 256 tokens
    • positive: string; min: 12, mean: 234.85, max: 256 tokens
    • negative: string; min: 18, mean: 234.19, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task213_rocstories_correct_ending_classification

  • Dataset: task213_rocstories_correct_ending_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 62, mean: 86.09, max: 125 tokens
    • positive: string; min: 60, mean: 85.37, max: 131 tokens
    • negative: string; min: 59, mean: 85.96, max: 131 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task821_protoqa_question_generation

  • Dataset: task821_protoqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 5, mean: 14.97, max: 61 tokens
    • positive: string; min: 5, mean: 15.01, max: 35 tokens
    • negative: string; min: 5, mean: 13.99, max: 93 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task493_review_polarity_classification

  • Dataset: task493_review_polarity_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 18, mean: 100.77, max: 256 tokens
    • positive: string; min: 19, mean: 106.77, max: 256 tokens
    • negative: string; min: 14, mean: 112.99, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task308_jeopardy_answer_generation_all

  • Dataset: task308_jeopardy_answer_generation_all
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 12, mean: 27.95, max: 50 tokens
    • positive: string; min: 10, mean: 26.96, max: 44 tokens
    • negative: string; min: 9, mean: 27.41, max: 48 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1595_event2mind_text_generation_1

  • Dataset: task1595_event2mind_text_generation_1
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 6, mean: 9.86, max: 18 tokens
    • positive: string; min: 6, mean: 9.95, max: 20 tokens
    • negative: string; min: 6, mean: 10.04, max: 20 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task040_qasc_question_generation

  • Dataset: task040_qasc_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 8, mean: 15.06, max: 29 tokens
    • positive: string; min: 7, mean: 15.04, max: 30 tokens
    • negative: string; min: 8, mean: 13.86, max: 32 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task231_iirc_link_classification

  • Dataset: task231_iirc_link_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 179, mean: 246.11, max: 256 tokens
    • positive: string; min: 170, mean: 246.14, max: 256 tokens
    • negative: string; min: 161, mean: 247.03, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1727_wiqa_what_is_the_effect

  • Dataset: task1727_wiqa_what_is_the_effect
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 44, mean: 95.88, max: 183 tokens
    • positive: string; min: 44, mean: 95.98, max: 185 tokens
    • negative: string; min: 43, mean: 96.22, max: 183 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task578_curiosity_dialogs_answer_generation

  • Dataset: task578_curiosity_dialogs_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 10, mean: 229.94, max: 256 tokens
    • positive: string; min: 118, mean: 235.71, max: 256 tokens
    • negative: string; min: 12, mean: 229.13, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task310_race_classification

  • Dataset: task310_race_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 101, mean: 255.03, max: 256 tokens
    • positive: string; min: 218, mean: 255.8, max: 256 tokens
    • negative: string; min: 101, mean: 255.03, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task309_race_answer_generation

  • Dataset: task309_race_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 75, mean: 255.04, max: 256 tokens
    • positive: string; min: 204, mean: 255.54, max: 256 tokens
    • negative: string; min: 75, mean: 255.25, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task379_agnews_topic_classification

  • Dataset: task379_agnews_topic_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 20, mean: 54.82, max: 193 tokens
    • positive: string; min: 20, mean: 54.53, max: 175 tokens
    • negative: string; min: 21, mean: 54.86, max: 187 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task030_winogrande_full_person

  • Dataset: task030_winogrande_full_person
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 7, mean: 7.6, max: 12 tokens
    • positive: string; min: 7, mean: 7.49, max: 12 tokens
    • negative: string; min: 7, mean: 7.37, max: 11 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1540_parsed_pdfs_summarization

  • Dataset: task1540_parsed_pdfs_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3, mean: 186.77, max: 256 tokens
    • positive: string; min: 46, mean: 190.07, max: 256 tokens
    • negative: string; min: 3, mean: 192.05, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task039_qasc_find_overlapping_words

  • Dataset: task039_qasc_find_overlapping_words
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 16, mean: 30.48, max: 55 tokens
    • positive: string; min: 16, mean: 30.06, max: 57 tokens
    • negative: string; min: 16, mean: 30.67, max: 60 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1206_atomic_classification_isbefore

  • Dataset: task1206_atomic_classification_isbefore
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 14, mean: 21.26, max: 40 tokens
    • positive: string; min: 14, mean: 20.84, max: 31 tokens
    • negative: string; min: 14, mean: 21.35, max: 31 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task157_count_vowels_and_consonants

  • Dataset: task157_count_vowels_and_consonants
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 24, mean: 28.03, max: 41 tokens
    • positive: string; min: 24, mean: 27.93, max: 41 tokens
    • negative: string; min: 24, mean: 28.34, max: 39 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task339_record_answer_generation

  • Dataset: task339_record_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 171, mean: 234.93, max: 256 tokens
    • positive: string; min: 171, mean: 234.22, max: 256 tokens
    • negative: string; min: 171, mean: 232.25, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task453_swag_answer_generation

  • Dataset: task453_swag_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 9, mean: 18.53, max: 60 tokens
    • positive: string; min: 9, mean: 18.23, max: 63 tokens
    • negative: string; min: 9, mean: 17.5, max: 55 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task848_pubmedqa_classification

  • Dataset: task848_pubmedqa_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 21, mean: 248.82, max: 256 tokens
    • positive: string; min: 21, mean: 249.96, max: 256 tokens
    • negative: string; min: 84, mean: 251.72, max: 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task673_google_wellformed_query_classification

  • Dataset: task673_google_wellformed_query_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 11.57, max 27 tokens
    • positive (string): min 6, mean 11.23, max 24 tokens
    • negative (string): min 6, mean 11.34, max 22 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task676_ollie_relationship_answer_generation

  • Dataset: task676_ollie_relationship_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 29, mean 51.45, max 113 tokens
    • positive (string): min 29, mean 49.38, max 134 tokens
    • negative (string): min 30, mean 51.68, max 113 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task268_casehold_legal_answer_generation

  • Dataset: task268_casehold_legal_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 235, mean 255.96, max 256 tokens
    • positive (string): min 156, mean 255.37, max 256 tokens
    • negative (string): min 226, mean 255.94, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task844_financial_phrasebank_classification

  • Dataset: task844_financial_phrasebank_classification
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 14, mean 39.74, max 86 tokens
    • positive (string): min 13, mean 38.28, max 78 tokens
    • negative (string): min 15, mean 39.06, max 86 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task330_gap_answer_generation

  • Dataset: task330_gap_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26, mean 107.2, max 256 tokens
    • positive (string): min 44, mean 108.16, max 256 tokens
    • negative (string): min 45, mean 110.56, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task595_mocha_answer_generation

  • Dataset: task595_mocha_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 44, mean 94.35, max 178 tokens
    • positive (string): min 21, mean 96.06, max 256 tokens
    • negative (string): min 19, mean 118.22, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task1285_kpa_keypoint_matching

  • Dataset: task1285_kpa_keypoint_matching
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 30, mean 52.36, max 92 tokens
    • positive (string): min 29, mean 50.15, max 84 tokens
    • negative (string): min 31, mean 53.13, max 88 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task234_iirc_passage_line_answer_generation

  • Dataset: task234_iirc_passage_line_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 143, mean 234.76, max 256 tokens
    • positive (string): min 155, mean 235.18, max 256 tokens
    • negative (string): min 146, mean 235.94, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task494_review_polarity_answer_generation

  • Dataset: task494_review_polarity_answer_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 106.28, max 256 tokens
    • positive (string): min 23, mean 111.87, max 256 tokens
    • negative (string): min 20, mean 112.42, max 249 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task670_ambigqa_question_generation

  • Dataset: task670_ambigqa_question_generation
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 11, mean 12.66, max 26 tokens
    • positive (string): min 11, mean 12.49, max 23 tokens
    • negative (string): min 11, mean 12.24, max 18 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

task289_gigaword_summarization

  • Dataset: task289_gigaword_summarization
  • Size: 1,018 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 25, mean 51.54, max 87 tokens
    • positive (string): min 27, mean 51.94, max 87 tokens
    • negative (string): min 25, mean 51.44, max 87 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

npr

  • Dataset: npr
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 12.33, max 29 tokens
    • positive (string): min 14, mean 148.6, max 256 tokens
    • negative (string): min 14, mean 115.37, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

nli

  • Dataset: nli
  • Size: 49,676 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 20.98, max 107 tokens
    • positive (string): min 4, mean 11.92, max 42 tokens
    • negative (string): min 4, mean 12.04, max 32 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

SimpleWiki

  • Dataset: SimpleWiki
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 29.18, max 116 tokens
    • positive (string): min 8, mean 33.55, max 156 tokens
    • negative (string): min 9, mean 56.1, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

amazon_review_2018

  • Dataset: amazon_review_2018
  • Size: 99,352 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 11.43, max 31 tokens
    • positive (string): min 11, mean 86.31, max 256 tokens
    • negative (string): min 13, mean 70.62, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

ccnews_title_text

  • Dataset: ccnews_title_text
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 15.63, max 60 tokens
    • positive (string): min 24, mean 209.51, max 256 tokens
    • negative (string): min 20, mean 197.07, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

agnews

  • Dataset: agnews
  • Size: 44,606 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 12.05, max 102 tokens
    • positive (string): min 11, mean 40.4, max 256 tokens
    • negative (string): min 11, mean 46.18, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

xsum

  • Dataset: xsum
  • Size: 10,140 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 8, mean 27.73, max 73 tokens
    • positive (string): min 33, mean 224.87, max 256 tokens
    • negative (string): min 48, mean 230.01, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

msmarco

  • Dataset: msmarco
  • Size: 173,354 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 8.96, max 32 tokens
    • positive (string): min 19, mean 78.76, max 235 tokens
    • negative (string): min 16, mean 79.64, max 218 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

yahoo_answers_title_answer

  • Dataset: yahoo_answers_title_answer
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 16.99, max 47 tokens
    • positive (string): min 5, mean 76.97, max 256 tokens
    • negative (string): min 7, mean 91.49, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

squad_pairs

  • Dataset: squad_pairs
  • Size: 24,838 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 14.24, max 48 tokens
    • positive (string): min 32, mean 152.76, max 256 tokens
    • negative (string): min 33, mean 163.22, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

wow

  • Dataset: wow
  • Size: 29,908 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 88.31, max 256 tokens
    • positive (string): min 100, mean 111.97, max 166 tokens
    • negative (string): min 80, mean 113.24, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_counterfactual-avs_triplets

  • Dataset: mteb-amazon_counterfactual-avs_triplets
  • Size: 4,055 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 12, mean 26.99, max 109 tokens
    • positive (string): min 12, mean 27.29, max 137 tokens
    • negative (string): min 12, mean 26.56, max 83 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_intent-avs_triplets

  • Dataset: mteb-amazon_massive_intent-avs_triplets
  • Size: 11,661 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 9.43, max 27 tokens
    • positive (string): min 3, mean 9.19, max 30 tokens
    • negative (string): min 3, mean 9.5, max 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_massive_scenario-avs_triplets

  • Dataset: mteb-amazon_massive_scenario-avs_triplets
  • Size: 11,661 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 9.61, max 30 tokens
    • positive (string): min 3, mean 9.01, max 21 tokens
    • negative (string): min 3, mean 9.48, max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-amazon_reviews_multi-avs_triplets

  • Dataset: mteb-amazon_reviews_multi-avs_triplets
  • Size: 198,192 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 7, mean 46.91, max 256 tokens
    • positive (string): min 7, mean 49.58, max 256 tokens
    • negative (string): min 7, mean 47.98, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-banking77-avs_triplets

  • Dataset: mteb-banking77-avs_triplets
  • Size: 10,139 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 16.61, max 98 tokens
    • positive (string): min 5, mean 15.78, max 87 tokens
    • negative (string): min 5, mean 16.11, max 83 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-emotion-avs_triplets

  • Dataset: mteb-emotion-avs_triplets
  • Size: 16,224 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 22.02, max 67 tokens
    • positive (string): min 5, mean 17.48, max 65 tokens
    • negative (string): min 5, mean 22.16, max 72 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-imdb-avs_triplets

  • Dataset: mteb-imdb-avs_triplets
  • Size: 24,839 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 18, mean 208.76, max 256 tokens
    • positive (string): min 52, mean 223.82, max 256 tokens
    • negative (string): min 41, mean 210.03, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_domain-avs_triplets

  • Dataset: mteb-mtop_domain-avs_triplets
  • Size: 15,715 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 10.11, max 35 tokens
    • positive (string): min 4, mean 9.66, max 24 tokens
    • negative (string): min 4, mean 10.16, max 29 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-mtop_intent-avs_triplets

  • Dataset: mteb-mtop_intent-avs_triplets
  • Size: 15,715 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 4, mean 10.08, max 26 tokens
    • positive (string): min 3, mean 9.78, max 27 tokens
    • negative (string): min 4, mean 10.11, max 28 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-toxic_conversations_50k-avs_triplets

  • Dataset: mteb-toxic_conversations_50k-avs_triplets
  • Size: 49,677 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 68.8, max 256 tokens
    • positive (string): min 3, mean 90.19, max 252 tokens
    • negative (string): min 3, mean 64.54, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

mteb-tweet_sentiment_extraction-avs_triplets

  • Dataset: mteb-tweet_sentiment_extraction-avs_triplets
  • Size: 27,373 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 20.82, max 60 tokens
    • positive (string): min 3, mean 20.02, max 56 tokens
    • negative (string): min 4, mean 20.66, max 50 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

covid-bing-query-gpt4-avs_triplets

  • Dataset: covid-bing-query-gpt4-avs_triplets
  • Size: 5,070 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 6, mean 15.08, max 33 tokens
    • positive (string): min 17, mean 37.42, max 239 tokens
    • negative (string): min 16, mean 37.25, max 100 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Evaluation Dataset

Unnamed Dataset

  • Size: 18,269 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 5, mean 15.81, max 64 tokens
    • positive (string): min 5, mean 144.25, max 256 tokens
    • negative (string): min 5, mean 143.7, max 256 tokens
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
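
The medi-mteb-dev_cosine_accuracy reported in the training logs below is the fraction of these dev triplets for which the anchor embedding is closer, by cosine similarity, to the positive than to the negative. A minimal sketch of computing such a metric with sentence-transformers' TripletEvaluator, using placeholder rows in place of the 18,269 dev samples:

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.evaluation import TripletEvaluator

    model = SentenceTransformer(
        "avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-randproj-512-final"
    )

    # Placeholder triplets; the card's evaluator runs over the full dev split.
    dev_evaluator = TripletEvaluator(
        anchors=["example query"],
        positives=["a passage that answers the query"],
        negatives=["a passage that does not answer the query"],
        name="medi-mteb-dev",
    )
    metrics = dev_evaluator(model)  # includes "medi-mteb-dev_cosine_accuracy"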
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • learning_rate: 5.656854249492381e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates
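
Expressed in code, these non-default values map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir is an assumed placeholder, not taken from the card):

    from sentence_transformers import SentenceTransformerTrainingArguments
    from sentence_transformers.training_args import BatchSamplers

    args = SentenceTransformerTrainingArguments(
        output_dir="output",  # assumed placeholder
        eval_strategy="steps",
        per_device_train_batch_size=512,
        per_device_eval_batch_size=512,
        learning_rate=5.656854249492381e-05,
        num_train_epochs=10,
        warmup_ratio=0.1,
        fp16=True,
        gradient_checkpointing=True,
        batch_sampler=BatchSamplers.NO_DUPLICATES,
    )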

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5.656854249492381e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
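
With many named training datasets and multi_dataset_batch_sampler set to proportional, batches are drawn from each dataset in proportion to its size. A self-contained sketch of that multi-dataset setup (the toy datasets and output_dir are placeholders, not the card's actual data):

    from datasets import Dataset
    from sentence_transformers import (
        SentenceTransformer,
        SentenceTransformerTrainer,
        SentenceTransformerTrainingArguments,
    )
    from sentence_transformers.losses import MultipleNegativesRankingLoss
    from sentence_transformers.training_args import MultiDatasetBatchSamplers

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    loss = MultipleNegativesRankingLoss(model, scale=20.0)

    def toy_split():
        # Stand-in for one of the named triplet datasets listed above.
        return Dataset.from_dict({
            "anchor": ["example query"],
            "positive": ["a passage that answers the query"],
            "negative": ["a passage that does not answer the query"],
        })

    args = SentenceTransformerTrainingArguments(
        output_dir="output",  # assumed placeholder
        multi_dataset_batch_sampler=MultiDatasetBatchSamplers.PROPORTIONAL,
    )

    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset={"nli": toy_split(), "msmarco": toy_split()},
        loss=loss,  # a dict mapping dataset names to losses also works
    )
    trainer.train()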

Training Logs

Epoch Step Training Loss Validation Loss medi-mteb-dev_cosine_accuracy
0 0 - - 0.8358
0.1308 500 2.6713 1.1708 0.8820
0.2616 1000 1.9946 1.1040 0.8890
0.3925 1500 2.0138 1.0559 0.8955
0.5233 2000 1.7733 1.0154 0.8976
0.6541 2500 1.8934 1.0145 0.8990
0.7849 3000 1.7916 1.0166 0.8990
0.9158 3500 1.8491 0.9818 0.8981
1.0466 4000 1.7568 0.9473 0.9031
1.1774 4500 1.8666 1.0801 0.9003
1.3082 5000 1.6883 0.9535 0.9008
1.4390 5500 1.7082 1.0652 0.9028
1.5699 6000 1.6634 1.0519 0.9040
1.7007 6500 1.689 0.9920 0.9039
1.8315 7000 1.6129 1.0213 0.9021
1.9623 7500 1.576 0.9993 0.9033
2.0931 8000 1.6392 1.0826 0.9069
2.2240 8500 1.5947 1.1802 0.9063
2.3548 9000 1.6222 1.2468 0.9075
2.4856 9500 1.4471 1.0080 0.9077
2.6164 10000 1.5689 1.1530 0.9088
2.7473 10500 1.4836 1.0531 0.9080
2.8781 11000 1.525 1.0097 0.9091
3.0089 11500 1.4068 1.0630 0.9071
3.1397 12000 1.5666 0.9643 0.9091
3.2705 12500 1.4479 1.0455 0.9077
3.4014 13000 1.5516 1.0711 0.9109
3.5322 13500 1.3551 0.9991 0.9093
3.6630 14000 1.4498 1.0136 0.9093
3.7938 14500 1.3856 1.0710 0.9097
3.9246 15000 1.4329 1.0074 0.9097
4.0555 15500 1.3455 1.0328 0.9094
4.1863 16000 1.4601 1.0259 0.9078
4.3171 16500 1.3684 1.0295 0.9120
4.4479 17000 1.3637 1.0637 0.9090
4.5788 17500 1.3688 1.0929 0.9100
4.7096 18000 1.3419 1.1102 0.9124
4.8404 18500 1.3378 0.9625 0.9129
4.9712 19000 1.3224 1.0812 0.9126
5.1020 19500 1.3579 1.0317 0.9121
5.2329 20000 1.3409 1.0622 0.9107
5.3637 20500 1.3929 1.1232 0.9113
5.4945 21000 1.213 1.0926 0.9123
5.6253 21500 1.313 1.0791 0.9118
5.7561 22000 1.2606 1.0581 0.9119
5.8870 22500 1.3094 1.0322 0.9134
6.0178 23000 1.2102 1.0039 0.9106
6.1486 23500 1.3686 1.0815 0.9140
6.2794 24000 1.2467 1.0143 0.9126
6.4103 24500 1.3445 1.0778 0.9116
6.5411 25000 1.1894 0.9941 0.9140
6.6719 25500 1.2617 1.0546 0.9121
6.8027 26000 1.2042 1.0126 0.9130
6.9335 26500 1.2559 1.0516 0.9142
7.0644 27000 1.2031 0.9957 0.9146
7.1952 27500 1.2866 1.0564 0.9142
7.3260 28000 1.2477 1.0420 0.9135
7.4568 28500 1.1961 1.0116 0.9151
7.5877 29000 1.227 1.0091 0.9154
7.7185 29500 1.1952 1.0307 0.9146
7.8493 30000 1.192 0.9344 0.9144
7.9801 30500 1.1871 1.0943 0.9151
8.1109 31000 1.2267 1.0049 0.9150
8.2418 31500 1.1928 1.0673 0.9149
8.3726 32000 1.2942 1.0980 0.9148
8.5034 32500 1.1099 1.0380 0.9151
8.6342 33000 1.1882 1.0734 0.9138
8.7650 33500 1.1365 1.0677 0.9144
8.8959 34000 1.2215 1.0256 0.9160
9.0267 34500 1.0926 1.0198 0.9142
9.1575 35000 1.269 1.0395 0.9160
9.2883 35500 1.1528 1.0306 0.9152
9.4192 36000 1.2324 1.0607 0.9158
9.5500 36500 1.1187 1.0418 0.9151
9.6808 37000 1.1722 1.0443 0.9151
9.8116 37500 1.1149 1.0457 0.9152
9.9424 38000 1.1751 1.0245 0.9156

Framework Versions

  • Python: 3.10.10
  • Sentence Transformers: 3.4.0.dev0
  • Transformers: 4.46.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 0.34.2
  • Datasets: 2.21.0
  • Tokenizers: 0.20.3
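
When reproducing these results, it may help to confirm that the installed packages match the versions above; a quick check (the versions in the comment are copied from this card):

    import accelerate
    import datasets
    import sentence_transformers
    import tokenizers
    import torch
    import transformers

    # Card reports: sentence-transformers 3.4.0.dev0, transformers 4.46.3,
    # torch 2.5.1+cu124, accelerate 0.34.2, datasets 2.21.0, tokenizers 0.20.3
    for mod in (sentence_transformers, transformers, torch,
                accelerate, datasets, tokenizers):
        print(mod.__name__, mod.__version__)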

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}