Update README.md
README.md
widget:
hypothesis_template: "This is {}."
---
# Fb_improved_zeroshot

Zero-shot model designed to classify academic search logs in German and English, developed by students at ETH Zürich.

This model builds on the [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli/) checkpoint provided by Meta on Hugging Face and was fine-tuned to suit the needs of this project.
## NLI-based Zero-Shot Text Classification

This method is based on Natural Language Inference (NLI); see [Yin et al.](https://arxiv.org/abs/1909.00161). The sequence to classify is posed as the NLI premise, and each candidate label is turned into a hypothesis; the model's entailment probability then scores how well the label fits. The tutorials below are adapted from the model card of [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli/).
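As a small illustration of this framing, each candidate label is slotted into the hypothesis template (the `hypothesis_template` from this card's metadata) to produce one NLI hypothesis per label; the labels here are example values only:

```python
# Build one NLI hypothesis per candidate label.
# The template comes from this card's metadata; the labels are examples.
hypothesis_template = "This is {}."
candidate_labels = ["Science", "Studies"]
hypotheses = [hypothesis_template.format(label) for label in candidate_labels]
# -> ["This is Science.", "This is Studies."]
```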
#### With the zero-shot classification pipeline

The model can be loaded with the `zero-shot-classification` pipeline like so:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="oigele/Fb_improved_zeroshot")
```
You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = "natural language processing"
candidate_labels = ['Location & Address', 'Employment', 'Organizational', 'Name', 'Service', 'Studies', 'Science']
classifier(sequence_to_classify, candidate_labels)
```
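The pipeline returns a dictionary containing the input sequence, the candidate labels sorted from most to least likely, and their scores. The sketch below shows the shape of the result; the comment values are invented for illustration, not real model output:

```python
result = classifier(sequence_to_classify, candidate_labels)
print(result["labels"][0], result["scores"][0])  # top label and its score
# result is shaped like (scores are made up for illustration):
# {'sequence': 'natural language processing',
#  'labels': ['Science', 'Studies', ...],
#  'scores': [0.71, 0.12, ...]}
```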
If more than one candidate label can be correct, pass `multi_label=True` to score each class independently (older `transformers` versions call this `multi_class`):

```python
candidate_labels = ['Location & Address', 'Employment', 'Organizational', 'Name', 'Service', 'Studies', 'Science']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
```
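In this mode each label gets its own entailment-vs-contradiction softmax, so the scores no longer sum to 1 across labels. The pipeline also accepts a list of sequences; the queries below are hypothetical examples of the academic search logs this model targets:

```python
# Classify several (hypothetical) search queries in one call.
queries = ["professor office hours", "machine learning lecture notes"]
for result in classifier(queries, candidate_labels, multi_label=True):
    print(result["sequence"], "->", result["labels"][0])
```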
#### With manual PyTorch

```python
# pose the sequence as an NLI premise and the label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nli_model = AutoModelForSequenceClassification.from_pretrained('oigele/Fb_improved_zeroshot').to(device)
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')

sequence = "natural language processing"  # text to classify
label = "Science"                         # one candidate label
premise = sequence
hypothesis = f'This is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
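To rank every candidate label the way the pipeline does in the single-label case, the entailment logit is collected for each label and a softmax is taken across labels. A minimal sketch, reusing the objects defined above and the illustrative label set from earlier:

```python
# Minimal sketch: score all candidate labels by entailment, then
# normalise across labels (mirrors the pipeline's single-label scoring).
candidate_labels = ['Location & Address', 'Employment', 'Organizational',
                    'Name', 'Service', 'Studies', 'Science']
entailment_logits = []
for label in candidate_labels:
    x = tokenizer.encode(sequence, f'This is {label}.', return_tensors='pt',
                         truncation='only_first')
    entailment_logits.append(nli_model(x.to(device))[0][0, 2])  # 2 = entailment

scores = torch.stack(entailment_logits).softmax(dim=0)
print(candidate_labels[int(scores.argmax())])  # best-scoring label
```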