---
language:
- en
license: mit
datasets:
- glue
- facebook/anli
pipeline_tag: zero-shot-classification
base_model: BAAI/bge-large-en
model-index:
- name: bge-large-en-mnli-anli
results: []
---
# bge-large-en-mnli-anli
This model is a fine-tuned version of [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) on the MultiNLI (GLUE) and ANLI datasets.
## Model description
The base model is described in [RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder](https://arxiv.org/abs/2205.12035) by Shitao Xiao, Zheng Liu, Yingxia Shao, and Zhao Cao (arXiv, 2022).
## How to use the model
### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="mjwong/bge-large-en-mnli-anli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```
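The pipeline returns the candidate labels ranked by score. The result has the following structure (the scores here are illustrative placeholders, not actual model outputs):
```python
# {'sequence': 'one day I will see the world',
#  'labels': ['travel', 'dancing', 'cooking'],
#  'scores': [0.97, 0.02, 0.01]}
```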
If more than one candidate label can be correct, pass `multi_label=True` so that each class is scored independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
```
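With `multi_label=True` each label is scored on its own entailment-vs-contradiction decision, so the scores are independent and need not sum to one; several labels can score high at once (again, illustrative values only):
```python
# {'sequence': 'one day I will see the world',
#  'labels': ['travel', 'exploration', 'dancing', 'cooking'],
#  'scores': [0.99, 0.96, 0.01, 0.01]}
```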
### With manual PyTorch
The model can also be applied to NLI tasks directly:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "mjwong/bge-large-en-mnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "But I thought you'd sworn off coffee."
hypothesis = "I thought that you vowed to drink more coffee."
# Pass both input_ids and attention_mask to the model.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 2) for pred, name in zip(prediction, label_names)}
print(prediction)
```
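For reference, this NLI setup is exactly what the zero-shot pipeline builds on: each candidate label is inserted into a hypothesis template (the pipeline default is `"This example is {}."`) and scored by its entailment probability. A minimal sketch of that mechanism, assuming entailment is index 0 as in the label order above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "mjwong/bge-large-en-mnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

sequence = "one day I will see the world"
labels = ["travel", "cooking", "dancing"]
# Turn each candidate label into an NLI hypothesis (default pipeline template).
hypotheses = [f"This example is {label}." for label in labels]

inputs = tokenizer([sequence] * len(labels), hypotheses,
                   truncation=True, padding=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits  # one (entailment, neutral, contradiction) row per label
entail_logits = logits[:, 0]  # assumes entailment is index 0, per the label order above
scores = torch.softmax(entail_logits, dim=0)  # single-label case: softmax across candidates
print({label: round(score.item(), 3) for label, score in zip(labels, scores)})
```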
### Eval results
The model was evaluated on the MultiNLI dev sets and the ANLI test sets; the metric reported is accuracy.
|Model|MNLI dev (m)|MNLI dev (mm)|ANLI test (R1)|ANLI test (R2)|ANLI test (R3)|
| :---: | :---: | :---: | :---: | :---: | :---: |
|[bge-large-en-mnli-anli](https://huggingface.co/mjwong/bge-large-en-mnli-anli)|0.846|0.842|0.602|0.451|0.452|
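These numbers can be reproduced with a straightforward loop over the relevant split. A minimal sketch for the MNLI matched dev set, assuming the GLUE label order (0 = entailment, 1 = neutral, 2 = contradiction) matches the model's output order shown above:
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "mjwong/bge-large-en-mnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device).eval()

dataset = load_dataset("glue", "mnli", split="validation_matched")
correct = 0
for example in dataset:
    inputs = tokenizer(example["premise"], example["hypothesis"],
                       truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(-1).item() == example["label"])
print(f"accuracy: {correct / len(dataset):.3f}")
```
Batching the examples would speed this up considerably; the per-example loop is kept for clarity.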
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
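Expressed as Hugging Face `TrainingArguments`, these settings would look roughly as follows (a sketch only: the output directory is a placeholder, and the number of epochs is not stated in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bge-large-en-mnli-anli",  # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
)
```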
### Framework versions
- Transformers 4.28.1
- PyTorch 2.0.1+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3