AyoubChLin committed
Commit 6a4f769
1 Parent(s): 144e7cb

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -15,13 +15,13 @@ tags:
---

# Huggingface Model: BART-MNLI-ZeroShot-Text-Classification
- This is a Huggingface model fine-tuned on the CNN news dataset for the zero-shot text classification task using BART-MNLI. The model achieved an F1 score of 94% and an accuracy of 94% on the CNN test dataset with a maximum length of 128 tokens.
+ This is a Huggingface model fine-tuned on the CNN news dataset for the zero-shot text classification task using DistilBART-MNLI. The model achieved an F1 score of 93% and an accuracy of 93% on the CNN test dataset with a maximum length of 128 tokens.

## Authors
This work was done by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/)

## Model Architecture
- The model architecture is based on the BART-MNLI transformer model. BART (Bidirectional and Auto-Regressive Transformers) is a denoising autoencoder that is pre-trained on a large corpus of text and fine-tuned on downstream natural language processing tasks.
+ The model architecture is based on the DistilBART-MNLI transformer model. DistilBART is a smaller, faster distilled version of BART that is pre-trained on a large corpus of text and fine-tuned on downstream natural language processing tasks.

## Dataset
The CNN news dataset was used for fine-tuning the model. This dataset contains news articles from the CNN website and is labeled into seven categories: politics, health, entertainment, tech, travel, world, and sports.
@@ -30,7 +30,7 @@ The CNN news dataset was used for fine-tuning the model. This dataset contains n
The model was fine-tuned for 1 epoch with a maximum sequence length of 256 tokens. Training took approximately 6 hours to complete.

## Evaluation Metrics
- The model achieved an F1 score of 94% and an accuracy of 94% on the CNN test dataset with a maximum length of 128 tokens.
+ The model achieved an F1 score of 93% and an accuracy of 93% on the CNN test dataset with a maximum length of 128 tokens.

# Usage
The model can be used for zero-shot text classification tasks on news articles. It can be accessed via the Huggingface Transformers library using the following code:
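The snippet the README refers to is cut off in this diff. As a minimal sketch of the intended usage, assuming the standard Transformers zero-shot pipeline and a placeholder model id (the real id is on the model card):

```python
# Minimal usage sketch, not the README's original snippet (which is cut off
# in this diff). The model id below is a placeholder assumption; substitute
# the actual repository id from the model card.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="AyoubChLin/DistilBART-MNLI-CNN-news",  # hypothetical repo id
)

labels = ["politics", "health", "entertainment", "tech", "travel", "world", "sports"]
result = classifier(
    "The government announced new tariffs on imported steel on Tuesday.",
    candidate_labels=labels,
)
print(result["labels"][0], result["scores"][0])  # top predicted category and its score
```

The pipeline scores each candidate label as an NLI hypothesis against the article, so the label set does not have to be fixed at training time; that is what makes the classifier zero-shot.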
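The README does not show how the 93% accuracy and F1 figures were computed. One plausible reproduction, reusing `classifier` from the sketch above and assuming a labeled test split and macro-averaged F1 (the averaging method is not stated), could look like this:

```python
# Hypothetical evaluation sketch; the toy examples stand in for the real CNN
# test split, and macro averaging is an assumption, as neither is specified.
from sklearn.metrics import accuracy_score, f1_score

labels = ["politics", "health", "entertainment", "tech", "travel", "world", "sports"]

# Toy stand-ins for held-out CNN test articles and their gold labels.
texts = [
    "A new vaccine rollout begins nationwide this week.",
    "Parliament passed the long-debated budget bill.",
]
gold = ["health", "politics"]

# Take the top-scoring candidate label as the prediction for each article.
preds = [classifier(t, candidate_labels=labels)["labels"][0] for t in texts]

print("accuracy:", accuracy_score(gold, preds))
print("macro F1:", f1_score(gold, preds, average="macro"))
```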