---
datasets:
  - glue
model-index:
  - name: e5-large-mnli
    results: []
pipeline_tag: zero-shot-classification
language:
  - en
license: mit
---

# e5-large-mnli

This model is a fine-tuned version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large) on the MNLI task of the glue dataset.

## Model description

[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/abs/2212.03533). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei. arXiv 2022.

## How to use the model

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="mjwong/e5-large-mnli")
```

You can then use this pipeline to classify sequences into any of the class names you specify:

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
# {'sequence': 'one day I will see the world',
#  'labels': ['travel', 'dancing', 'cooking'],
#  'scores': [0.9494319558143616, 0.044598229229450226, 0.00596982054412365]}
```

If more than one candidate label can be correct, pass `multi_label=True` (the current name for the deprecated `multi_class` argument) to calculate each class's score independently:

```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
# {'sequence': 'one day I will see the world',
#  'labels': ['exploration', 'travel', 'dancing', 'cooking'],
#  'scores': [0.9918234944343567,
#   0.9867327213287354,
#   0.40335655212402344,
#   0.0020157278049737215]}
```
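
Under the hood, the pipeline poses the input as an NLI premise and each candidate label as a hypothesis. Below is a minimal sketch of the same idea without the pipeline; the `"entailment"` key lookup is an assumption, so verify it against `model.config.label2id` for this checkpoint:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mjwong/e5-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "one day I will see the world"
hypothesis = "This example is travel."  # the pipeline's default hypothesis template

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Probability that the premise entails the hypothesis. The "entailment"
# label name is an assumption -- check model.config.label2id.
entail_idx = model.config.label2id["entailment"]
print(logits.softmax(dim=-1)[0, entail_idx].item())
```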

## Eval results

The model was evaluated on the MultiNLI dev sets and the ANLI test sets. The metric used is accuracy.

| Datasets | mnli_dev_m | mnli_dev_mm | anli_test_r1 | anli_test_r2 | anli_test_r3 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| e5-base-mnli | 0.840 | 0.839 | 0.231 | 0.285 | 0.309 |
| e5-large-mnli | 0.868 | 0.869 | 0.301 | 0.296 | 0.294 |
| e5-large-mnli-anli | 0.843 | 0.848 | 0.646 | 0.484 | 0.458 |
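
To spot-check the MultiNLI numbers yourself, here is a minimal sketch (subsampled for speed; it assumes the model's label names match the dataset's `entailment`/`neutral`/`contradiction` names, which you should verify via `model.config.id2label`):

```python
from datasets import load_dataset
from transformers import pipeline

nli = pipeline("text-classification", model="mjwong/e5-large-mnli")
mnli = load_dataset("glue", "mnli", split="validation_matched")

subset = mnli.select(range(200))  # subsample for a quick sanity check
correct = 0
for ex in subset:
    # Score the premise/hypothesis pair; agreement between predicted and
    # dataset label names is an assumption.
    pred = nli({"text": ex["premise"], "text_pair": ex["hypothesis"]})[0]["label"]
    gold = subset.features["label"].int2str(ex["label"])
    correct += int(pred.lower() == gold.lower())
print(correct / len(subset))
```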

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
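
The training script itself is not part of this repo; as a rough sketch, the values above would map onto `TrainingArguments` like this (the output directory and any unlisted defaults are assumptions):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters onto TrainingArguments;
# output_dir and all unlisted settings are assumptions, not the author's setup.
training_args = TrainingArguments(
    output_dir="e5-large-mnli",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```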

## Framework versions

- Transformers 4.28.1
- Pytorch 1.12.1+cu116
- Datasets 2.11.0
- Tokenizers 0.12.1