---
tags:
  - NLI
  - Natural Language Inference
  - FEVER
  - text-classification
language: en
task: NLI and generation of Adversarial Examples
datasets:
  - fever
license: unknown
metrics:
  epoch:
    - 0
  train_loss:
    - 0.0019978578202426434
  val_loss:
    - 2.2035093307495117
  train_acc:
    - 1
  val_acc:
    - 0.7333915829658508
  train_f1_score:
    - 1
  val_f1_score:
    - 0.7333915829658508
  best_metric: 2.2035093307495117
model-index:
  - name: nli-fever
    results:
      - task:
          type: text-classification
          name: Natural Language Inference
        dataset:
          name: FEVER
          type: fever
        metrics:
          - type: accuracy
            value: 0.73
            name: Accuracy
            verified: false
---

# NLI-FEVER Model

This model is fine-tuned for Natural Language Inference (NLI) tasks using the FEVER dataset.

## Model description

This model is based on RoBERTa and has been fine-tuned for NLI. It classifies a given premise-hypothesis pair into one of three categories: entailment, contradiction, or neutral.

## Intended uses & limitations

This model is intended for use in NLI tasks, particularly those related to fact-checking and verifying information. It should not be used for tasks it wasn't explicitly trained for.

## Training and evaluation data

The model was trained on the FEVER (Fact Extraction and VERification) dataset.

## Training procedure

The model was trained for a single epoch (recorded as epoch 0 in the saved metrics), reaching a final validation loss of 2.2035, a validation accuracy of 0.7334, and a validation F1 score of 0.7334.
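
For reference, the sketch below shows a minimal fine-tuning setup. It is only an illustration under assumed preprocessing: the `roberta-base` starting checkpoint, the premise/hypothesis/label column names, the label id mapping, and the hyperparameters are assumptions, not a record of the exact training recipe.

```python
# Minimal fine-tuning sketch (assumed setup, not the exact original recipe).
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)

# Assumption: FEVER examples already flattened into NLI-style triples.
train_data = Dataset.from_dict({
    "premise": ["The Eiffel Tower is in Paris."],
    "hypothesis": ["The Eiffel Tower is located in France."],
    "label": [0],  # assumed mapping, e.g. 0 = entailment, 1 = neutral, 2 = contradiction
})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

def tokenize(batch):
    # Encode each premise/hypothesis pair as a single sequence-pair input.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="nli-fever",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```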

## How to use

You can use this model directly with a pipeline for text classification:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="YusuphaJuwara/nli-fever")

# For sentence pairs, pass the premise and hypothesis as text / text_pair.
result = classifier({"text": "premise", "text_pair": "hypothesis"})
print(result)
```

Or you can load the tokenizer and model and run inference directly:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("YusuphaJuwara/nli-fever")
model = AutoModelForSequenceClassification.from_pretrained("YusuphaJuwara/nli-fever")

# Encode the premise/hypothesis pair as a single sequence-pair input.
inputs = tokenizer("premise", "hypothesis", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Index of the highest-scoring class.
predictions = outputs.logits.argmax(-1)
print(predictions)
```
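
To turn the predicted index into a human-readable label, you can look it up in the `id2label` mapping stored in the model config. Note that the names depend on how the checkpoint was saved and may fall back to generic values such as `LABEL_0`:

```python
# Continuing the snippet above: map the class index to a label string.
# The names come from the checkpoint's config (id2label) and may be generic
# (e.g. "LABEL_0") if no custom mapping was saved with the model.
label = model.config.id2label[predictions.item()]
print(label)
```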

## Saved Metrics

This model repository includes a `metrics.json` file containing detailed training metrics. You can load these metrics using the following code:

```python
from huggingface_hub import hf_hub_download
import json

metrics_file = hf_hub_download(repo_id="YusuphaJuwara/nli-fever", filename="metrics.json")
with open(metrics_file, "r") as f:
    metrics = json.load(f)

# Now you can access metrics like:
print("Last epoch:", metrics["last_epoch"])
print("Final validation loss:", metrics["val_losses"][-1])
print("Final validation accuracy:", metrics["val_accuracies"][-1])
```

These metrics can be useful for continuing training from the last epoch or for detailed analysis of the training process.
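
For example, a small sketch like the following plots the validation curves stored in `metrics.json`. It assumes only the keys shown above and uses matplotlib for plotting:

```python
import json
import matplotlib.pyplot as plt
from huggingface_hub import hf_hub_download

metrics_file = hf_hub_download(repo_id="YusuphaJuwara/nli-fever", filename="metrics.json")
with open(metrics_file) as f:
    metrics = json.load(f)

epochs = range(len(metrics["val_losses"]))

# Validation loss and accuracy per epoch.
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
ax_loss.plot(epochs, metrics["val_losses"], marker="o")
ax_loss.set(title="Validation loss", xlabel="Epoch", ylabel="Loss")
ax_acc.plot(epochs, metrics["val_accuracies"], marker="o")
ax_acc.set(title="Validation accuracy", xlabel="Epoch", ylabel="Accuracy")
fig.tight_layout()
plt.show()
```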


## Limitations and bias

This model may exhibit biases present in the training data. Always validate results and use the model responsibly.

## Plots

The repository also contains the following training plots:

- Label distribution
- Loss
- Accuracy
- F1 score
- Confusion matrix
- Precision-recall curve
- ROC curve