Update README.md
README.md (CHANGED)
@@ -21,3 +21,27 @@ between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR
 It was finetuned on private NLI Bulgarian data.
 
 Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
+
+### How to use
+
+Here is how to use this model in PyTorch:
+
+```python
+>>> import torch
+>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer
+>>>
+>>> model_id = 'rmihaylov/bert-base-nli-theseus-bg'
+>>> model = AutoModelForSequenceClassification.from_pretrained(model_id)
+>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
+>>>
+>>> inputs = tokenizer.encode_plus(
+...     'Няколко момчета играят футбол.',
+...     'Няколко момичета играят футбол.',
+...     return_tensors='pt')
+>>>
+>>> outputs = model(**inputs)
+>>> contradiction, entailment, neutral = torch.softmax(outputs[0][0], dim=0).detach()
+>>> contradiction, neutral, entailment
+
+(tensor(0.9998), tensor(0.0001), tensor(5.9929e-05))
+```
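
The snippet above unpacks the three scores positionally. If you would rather get them keyed by label, a small wrapper along the following lines can be convenient. This is a sketch, not part of the original model card: the helper `nli_scores` is a made-up name, and the (contradiction, entailment, neutral) label order is assumed from the unpacking in the snippet above, so verify it against `model.config.id2label` before relying on it.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = 'rmihaylov/bert-base-nli-theseus-bg'
# Assumed label order, copied from the README snippet's unpacking;
# double-check against model.config.id2label.
LABELS = ('contradiction', 'entailment', 'neutral')

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

def nli_scores(premise: str, hypothesis: str) -> dict:
    """Return a label -> probability mapping for one premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors='pt')
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    probs = torch.softmax(logits, dim=0)
    return {label: round(float(p), 4) for label, p in zip(LABELS, probs)}

print(nli_scores(
    'Няколко момчета играят футбол.',    # "Several boys are playing football."
    'Няколко момичета играят футбол.'))  # "Several girls are playing football."
```

Passing the two sentences directly to the tokenizer encodes them as a pair, equivalent to the `encode_plus` call in the snippet above.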