---
datasets:
- glue
- anli
model-index:
- name: e5-large-mnli-anli
  results: []
pipeline_tag: zero-shot-classification
language:
- en
license: mit
---

# e5-large-mnli-anli

This model is a fine-tuned version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large) on the GLUE (MNLI) and ANLI datasets.

## Model description

[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022

## How to use the model

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="mjwong/e5-large-mnli-anli")
```

You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'sequence': 'one day I will see the world',
# 'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9878318905830383, 0.01044005248695612, 0.001728130504488945]}
```

If more than one candidate label can be correct, pass `multi_label=True` so that each class is scored independently:

```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
#{'sequence': 'one day I will see the world',
# 'labels': ['exploration', 'travel', 'dancing', 'cooking'],
# 'scores': [0.9956096410751343,
#  0.9929478764533997,
#  0.21706733107566833,
#  0.0005817742203362286]}
```

### Eval results

The model was evaluated using the MultiNLI dev sets and the ANLI test sets. The metric used is accuracy.

|Datasets|mnli_dev_m|mnli_dev_mm|anli_test_r1|anli_test_r2|anli_test_r3|
| :---: | :---: | :---: | :---: | :---: | :---: |
|[e5-base-mnli](https://huggingface.co/mjwong/e5-base-mnli)|0.840|0.839|0.231|0.285|0.309|
|[e5-base-v2-mnli](https://huggingface.co/mjwong/e5-base-v2-mnli)|0.844|0.838|0.253|0.288|0.301|
|[e5-large-mnli](https://huggingface.co/mjwong/e5-large-mnli)|0.868|0.869|0.301|0.296|0.294|
|[e5-large-v2-mnli](https://huggingface.co/mjwong/e5-large-v2-mnli)|0.875|0.876|0.354|0.298|0.313|
|[e5-large-unsupervised-mnli](https://huggingface.co/mjwong/e5-large-unsupervised-mnli)|0.865|0.867|0.314|0.285|0.303|
|[e5-large-mnli-anli](https://huggingface.co/mjwong/e5-large-mnli-anli)|0.843|0.848|0.646|0.484|0.458|
|[e5-large-unsupervised-mnli-anli](https://huggingface.co/mjwong/e5-large-unsupervised-mnli-anli)|0.836|0.842|0.634|0.481|0.478|

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch corresponding to these values is shown after the framework versions below):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2

### Framework versions

- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.12.1
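
For reference, the hyperparameters above map roughly onto the following `TrainingArguments` sketch. This is an illustration only: the `output_dir` name is hypothetical, and dataset preprocessing and `Trainer` wiring are omitted.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported training configuration.
# Only the values listed in this card are set explicitly.
training_args = TrainingArguments(
    output_dir="e5-large-mnli-anli",   # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Transformers
    # defaults (adam_beta1, adam_beta2, adam_epsilon), so they are not set here.
)
```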