nreimers committed on
Commit 0d3f20e
1 Parent(s): 45a65f3
Files changed (2)
  1. README.md +32 -5
  2. config.json +6 -6
README.md CHANGED
@@ -1,16 +1,30 @@
- # Cross-Encoder for Quora Duplicate Questions Detection
  This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

  ## Training Data
  The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

  ## Usage

  Pre-trained models can be used like this:
  ```python
  from sentence_transformers import CrossEncoder
- model = CrossEncoder('model_name')
  scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

  #Convert scores to labels
@@ -24,8 +38,8 @@ You can use the model also directly with Transformers library (without SentenceT
  from transformers import AutoTokenizer, AutoModelForSequenceClassification
  import torch

- model = AutoModelForSequenceClassification.from_pretrained('model_name')
- tokenizer = AutoTokenizer.from_pretrained('model_name')

  features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

@@ -35,4 +49,17 @@ with torch.no_grad():
  label_mapping = ['contradiction', 'entailment', 'neutral']
  labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
  print(labels)
- ```
+ ---
+ language: en
+ pipeline_tag: zero-shot-classification
+ tags:
+ - deberta-base-base
+ datasets:
+ - multi_nli
+ - snli
+ metrics:
+ - accuracy
+ ---
+
+ # Cross-Encoder for Natural Language Inference
  This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.

  ## Training Data
  The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

+ ## Performance
+ For evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

  ## Usage

  Pre-trained models can be used like this:
  ```python
  from sentence_transformers import CrossEncoder
+ model = CrossEncoder('cross-encoder/nli-deberta-base')
  scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

  #Convert scores to labels
  from transformers import AutoTokenizer, AutoModelForSequenceClassification
  import torch

+ model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-base')
+ tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-base')

  features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")

  label_mapping = ['contradiction', 'entailment', 'neutral']
  labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
  print(labels)
+ ```
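The `argmax` in the usage snippet above discards confidence information. If relative confidences are useful, the three logits can be turned into probabilities with a softmax first. A minimal sketch in plain Python — the logit values here are made up for illustration, not real model output:

```python
import math

def softmax(logits):
    # subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical per-pair logits in (contradiction, entailment, neutral) order
logits = [-2.1, 3.4, 0.2]
probs = softmax(logits)
print([round(p, 3) for p in probs])  # entailment dominates
```

With real model output, the same transformation is available as `torch.softmax(scores, dim=1)` on the logits tensor.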
+
+ ## Zero-Shot Classification
+ This model can also be used for zero-shot-classification:
+ ```python
+ from transformers import pipeline
+
+ classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-base')
+
+ sent = "Apple just announced the newest iPhone X"
+ candidate_labels = ["technology", "sports", "politics"]
+ res = classifier(sent, candidate_labels)
+ print(res)
+ ```
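The zero-shot section added in this commit works because the pipeline recasts each candidate label as an NLI hypothesis (by default using the template "This example is {}.") and scores the input text as the premise. A rough sketch of that mechanism, without loading the model — the entailment logits below are invented placeholders, not real scores:

```python
import math

premise = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]

# Each label becomes an NLI hypothesis via the pipeline's default template.
hypotheses = [f"This example is {label}." for label in candidate_labels]

# In the real pipeline, each (premise, hypothesis) pair is run through the
# cross-encoder and the entailment logit is kept. These values are made up.
entailment_logits = [4.2, -1.3, -0.8]

# A softmax over the entailment logits ranks the candidate labels.
exps = [math.exp(x) for x in entailment_logits]
probs = [e / sum(exps) for e in exps]
ranked = sorted(zip(candidate_labels, probs), key=lambda t: -t[1])
print(ranked[0][0])  # technology
```

This is why the commit also sets `pipeline_tag: zero-shot-classification` in the card metadata: any NLI cross-encoder with an entailment label can serve as a zero-shot classifier this way.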
config.json CHANGED
@@ -8,16 +8,16 @@
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
- "0": "LABEL_0",
- "1": "LABEL_1",
- "2": "LABEL_2"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
- "LABEL_0": 0,
- "LABEL_1": 1,
- "LABEL_2": 2
  },
  "layer_norm_eps": 1e-07,
  "max_position_embeddings": 512,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
+ "0": "contradiction",
+ "1": "entailment",
+ "2": "neutral"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
+ "contradiction": 0,
+ "entailment": 1,
+ "neutral": 2
  },
  "layer_norm_eps": 1e-07,
  "max_position_embeddings": 512,
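The config.json change above replaces the generic `LABEL_0`/`LABEL_1`/`LABEL_2` placeholders with meaningful class names, so downstream tools that read `id2label` (such as pipelines) report readable labels instead of placeholders. The effect can be sketched without loading the model — the logits here are hypothetical, purely for illustration:

```python
# id2label mapping from the updated config.json
id2label = {0: "contradiction", 1: "entailment", 2: "neutral"}

# hypothetical raw logits for two sentence pairs (illustrative values only)
logits = [[-3.2, 4.1, -0.5], [3.8, -2.9, -0.7]]

def to_labels(batch):
    # pick the highest-scoring class per pair and map it to its name
    return [id2label[max(range(len(row)), key=row.__getitem__)] for row in batch]

print(to_labels(logits))  # ['entailment', 'contradiction']
```

Without this commit, the same call would have produced `['LABEL_1', 'LABEL_0']`, leaving the NLI interpretation to the user.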