toniopelo committed on
Commit e68b583
1 Parent(s): a6a2043

Update model card as it was on the original model


Here is a copy/paste (with some effort to preserve the formatting) of the [web archive page of the BaptisteDoyen/camembert-base-xnli model](https://web.archive.org/web/20230620234933/https://huggingface.co/BaptisteDoyen/camembert-base-xnli). I had to search for it, so I am putting it here in case it helps others.

Files changed (1)
  1. README.md +92 -1
README.md CHANGED
@@ -1,4 +1,95 @@
  ---
  pipeline_tag: zero-shot-classification
+ license: mit
+ datasets:
+ - xnli
+ language:
+ - fr
+ tags:
+ - camembert
+ - text-classification
+ - nli
+ - xnli
  ---
- A copy of the original BaptisteDoyen/camembert-base-xnli as it gives a 404 error right now.
+ This is a copy of the original BaptisteDoyen/camembert-base-xnli model, as it gives a 404 error right now.\
+ Here is the model card as it was on the BaptisteDoyen/camembert-base-xnli page.
+ 
+ # camembert-base-xnli
+ 
+ ## Model description
+ 
+ Camembert-base model fine-tuned on the French part of the XNLI dataset.
+ One of the few Zero-Shot classification models working on French 🇫🇷
+ 
+ ## Intended uses & limitations
+ 
+ #### How to use
+ 
+ Two different usages:
+ 
+ - As a Zero-Shot sequence classifier:
+ ```
+ from transformers import pipeline
+ 
+ classifier = pipeline("zero-shot-classification",
+                       model="BaptisteDoyen/camembert-base-xnli")
+ 
+ sequence = "L'équipe de France joue aujourd'hui au Parc des Princes"
+ candidate_labels = ["sport", "politique", "science"]
+ hypothesis_template = "Ce texte parle de {}."
+ 
+ classifier(sequence, candidate_labels, hypothesis_template=hypothesis_template)
+ # outputs:
+ # {'sequence': "L'équipe de France joue aujourd'hui au Parc des Princes",
+ #  'labels': ['sport', 'politique', 'science'],
+ #  'scores': [0.8595073223114014, 0.10821866989135742, 0.0322740375995636]}
+ ```
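Editor's aside: the `hypothesis_template` used above is plain string formatting. This sketch (independent of the model, so it runs without `transformers`) shows how the zero-shot pipeline turns each candidate label into one NLI hypothesis before scoring it against the sequence:

```python
# How the zero-shot pipeline builds one NLI hypothesis per candidate label
# from hypothesis_template -- pure string formatting, no model required.
candidate_labels = ["sport", "politique", "science"]
hypothesis_template = "Ce texte parle de {}."
hypotheses = [hypothesis_template.format(label) for label in candidate_labels]
print(hypotheses)
# ['Ce texte parle de sport.', 'Ce texte parle de politique.', 'Ce texte parle de science.']
```

Each hypothesis is then paired with the input sequence and scored by the NLI model; the per-label scores come from those pairwise entailment probabilities.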
+ - As a premise/hypothesis checker:
+ The idea here is to compute a probability of the form P(premise|hypothesis)
+ 
+ ```
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ 
+ # load model and tokenizer
+ nli_model = AutoModelForSequenceClassification.from_pretrained("BaptisteDoyen/camembert-base-xnli")
+ tokenizer = AutoTokenizer.from_pretrained("BaptisteDoyen/camembert-base-xnli")
+ # sequences
+ premise = "le score pour les bleus est élevé"
+ hypothesis = "L'équipe de France a fait un bon match"
+ # tokenize and run through model
+ x = tokenizer.encode(premise, hypothesis, return_tensors='pt')
+ logits = nli_model(x)[0]
+ # we throw away "neutral" (dim 1) and take the probability of
+ # "entailment" (0) as the probability of the label being true
+ entail_contradiction_logits = logits[:, ::2]
+ probs = entail_contradiction_logits.softmax(dim=1)
+ prob_label_is_true = probs[:, 0]
+ prob_label_is_true[0].tolist() * 100
+ # outputs
+ # 86.40775084495544
+ ```
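The `logits[:, ::2]` step above is the crux: it keeps columns 0 (entailment) and 2 (contradiction) and discards column 1 (neutral) before the softmax. A minimal pure-Python sketch of that computation, with made-up logits (the values are illustrative, not from the model):

```python
import math

def prob_entailment(logits):
    """Drop the "neutral" logit (index 1) and softmax over the remaining
    two, mirroring logits[:, ::2] followed by softmax(dim=1) above."""
    entail, _neutral, contradiction = logits
    exps = [math.exp(entail), math.exp(contradiction)]
    return exps[0] / sum(exps)

# illustrative logits ordered [entailment, neutral, contradiction]
print(prob_entailment([2.0, 0.1, -1.5]) * 100)  # entailment probability in %
```

Renormalising over only two classes is why the result can be read directly as "probability the hypothesis is true".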
+ 
+ ## Training data
+ 
+ Training data is the French fold of the [XNLI](https://research.fb.com/publications/xnli-evaluating-cross-lingual-sentence-representations/) dataset released in 2018 by Facebook.
+ Easily available using the datasets library:
+ 
+ ```
+ from datasets import load_dataset
+ dataset = load_dataset('xnli', 'fr')
+ ```
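Editor's note, for reference: the XNLI integer-label convention is what the `logits[:, ::2]` trick in the snippet further up relies on (0 = entailment, 1 = neutral, 2 = contradiction):

```python
# XNLI label convention assumed by the entailment/contradiction slicing above:
XNLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}
print(XNLI_LABELS[0])
```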
+ 
+ ## Training/Fine-Tuning procedure
+ 
+ The training procedure is fairly basic and was performed on the cloud using a single GPU.
+ Main training parameters:
+ 
+ - `lr = 2e-5` with `lr_scheduler_type = "linear"`
+ - `num_train_epochs = 4`
+ - `batch_size = 12` (limited by GPU memory)
+ - `weight_decay = 0.01`
+ - `metric_for_best_model = "eval_accuracy"`
89
+ ## Eval results
90
+
91
+ We obtain the following results on validation and test sets:
92
+ Set|Accuracy
93
+ ---|---
94
+ validation|81.4
95
+ test|81.7