Update README.md

README.md CHANGED
@@ -53,15 +53,17 @@ Zero-shot classification
The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by:

$$P(hypothesis=c|premise)=\frac{e^{P(premise=entailment\vert hypothesis\; c)}}{\sum_{i\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis\; i)}}$$
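In other words, the entailment score obtained for each candidate label is normalized with a softmax over the whole label set. A minimal sketch of that normalization, using made-up entailment scores (the function name and values are illustrative, not part of the model's API):

```python
import numpy as np

def zero_shot_probs(entailment_scores):
    """Softmax over per-label entailment scores, as in the formula above."""
    scores = np.asarray(entailment_scores, dtype=float)
    exp = np.exp(scores - scores.max())  # shift the maximum to 0 for numerical stability
    return exp / exp.sum()

# Hypothetical entailment scores for the labels ["positif", "négatif"]
probs = zero_shot_probs([2.1, -0.4])
print(probs)  # the two probabilities sum to 1
```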
For this part, we use two datasets. The first one, [allocine](https://huggingface.co/datasets/allocine), was used to train the sentiment analysis models. It is composed of two classes, "positif" and "négatif", corresponding to the appreciation of movie reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels.
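This setting can be reproduced with the Hugging Face zero-shot-classification pipeline; the review below is made up for illustration:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cmarkea/distilcamembert-base-nli",
)

# Hypothetical movie review (not taken from allocine)
result = classifier(
    "Un film magnifique, je le recommande vivement !",
    candidate_labels=["positif", "négatif"],
    hypothesis_template="Ce commentaire est {}.",
)
print(result["labels"], result["scores"])
```

Labels are returned sorted by decreasing score, and the scores sum to 1 over the candidate labels.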

| **[allocine](https://huggingface.co/datasets/allocine)** | **time (ms)** | **MCC (x100)** |
| :--------------: | :-----------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **205.54** | 63.71 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **73.74** |
| [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 70.05 |
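The MCC metric reported in the tables is the Matthews correlation coefficient between predicted and true labels, scaled by 100. A toy computation with scikit-learn (the labels below are made up, not the actual evaluation data):

```python
from sklearn.metrics import matthews_corrcoef

# Toy ground-truth and predicted labels (not the real allocine evaluation)
y_true = ["positif", "négatif", "positif", "positif", "négatif", "négatif"]
y_pred = ["positif", "négatif", "négatif", "positif", "négatif", "positif"]

mcc = matthews_corrcoef(y_true, y_pred)
print(f"MCC (x100): {100 * mcc:.2f}")  # → 33.33 for these toy labels
```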
The second one, [mlsum](https://huggingface.co/datasets/mlsum), was used to train the summarization models. We use the article summaries to predict their topics. To this end, we aggregate sub-topics and select a few of them. In this case, the hypothesis template is "C'est un article traitant de {}." and the candidate labels are "économie", "politique", "sport", "technologie" and "science".

| **[mlsum](https://huggingface.co/datasets/mlsum)** | **time (ms)** | **MCC (x100)** |
| :--------------: | :-----------: | :------------: |
| [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **261.99** | 60.12 |
| [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 499.45 | **60.14** |
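The topic-classification setting works the same way, only with the article-oriented hypothesis template and five candidate labels; the summary below is invented for illustration:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cmarkea/distilcamembert-base-nli",
)

# Hypothetical article summary (not taken from mlsum)
result = classifier(
    "Le gouvernement a présenté son nouveau budget à l'Assemblée nationale.",
    candidate_labels=["économie", "politique", "sport", "technologie", "science"],
    hypothesis_template="C'est un article traitant de {}.",
)
print(result["labels"][0])  # most probable topic
```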