---
license: cc-by-4.0
language: 
- pl
datasets:
- Wikipedia
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
widget:
- source_sentence: "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
  sentences:
    - "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
    - "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "
    - "Samica o długości ciała 10–11 mm, szczoteczki na tylnych nogach służące do zbierania pyłku oraz włoski na końcu odwłoka jaskrawo pomarańczowoczerwone. "
  example_title: "Uczenie maszynowe"
---
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>

# SHerbert - Polish SentenceBERT
SentenceBERT is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. Training was based on the original paper [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084), with a slight modification of how the training data was used. The goal of the model is to generate embeddings that reflect the semantic and topical similarity of the given texts.

> Semantic textual similarity analyzes how similar two pieces of texts are.
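
As a quick illustration of the comparison step, the score between two sentence embeddings is plain cosine similarity. The snippet below is a minimal sketch with made-up toy vectors (not real model outputs):

```python
import numpy as np

def cosine_similarity(u, v):
    # Dot product of the two embeddings divided by the product of their norms:
    # values close to 1.0 mean very similar, values near 0 mean unrelated.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings", used only to illustrate the comparison.
a = np.array([0.2, 0.8, 0.1, 0.4])
b = np.array([0.25, 0.7, 0.0, 0.5])
print(cosine_similarity(a, b))
```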

Read more about how the model was prepared in our [blog post](https://voicelab.ai/blog/).

The base model is the Polish HerBERT, a BERT-based language model. For more details, please refer to "HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish".

# Corpus
The model was trained solely on [Wikipedia](https://dumps.wikimedia.org/).


# Tokenizer

As in the original HerBERT implementation, the training dataset was tokenized into subwords using character-level byte-pair encoding (CharBPETokenizer) with a vocabulary size of 50k tokens. The tokenizer itself was trained with the `tokenizers` library.
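
For reference, a rough sketch of that training step with the `tokenizers` library might look like the code below; the corpus file name and any settings beyond the 50k vocabulary size are assumptions, not the authors' exact configuration.

```python
from tokenizers import CharBPETokenizer

# Hypothetical re-creation of the tokenizer training described above.
# "wikipedia_pl.txt" is a placeholder corpus file, not the actual training data.
tokenizer = CharBPETokenizer()
tokenizer.train(files=["wikipedia_pl.txt"], vocab_size=50000)
tokenizer.save("charbpe-tokenizer.json")
```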

We kindly encourage you to use the Fast version of the tokenizer, namely HerbertTokenizerFast.
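
For example (a minimal sketch; it assumes the fast tokenizer files are available in this repository):

```python
from transformers import HerbertTokenizerFast

# Load the fast tokenizer for this checkpoint.
tokenizer = HerbertTokenizerFast.from_pretrained("Voicelab/sbert-base-cased-pl")

# Sentences are split into subwords from the 50k CharBPE vocabulary.
print(tokenizer.tokenize("Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji."))
```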

# Usage

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import pairwise

sbert = AutoModel.from_pretrained("Voicelab/sbert-base-cased-pl")
tokenizer = AutoTokenizer.from_pretrained("Voicelab/sbert-base-cased-pl")

s0 = "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
s1 = "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
s2 = "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "

# Tokenize all sentences as one padded batch.
tokens = tokenizer([s0, s1, s2],
                   padding=True,
                   truncation=True,
                   return_tensors='pt')

# The pooler output gives one embedding vector per sentence.
with torch.no_grad():
    x = sbert(tokens["input_ids"],
              tokens["attention_mask"]).pooler_output

# similarity between sentences s0 and s1
print(pairwise.cosine_similarity(x[0:1], x[1:2]))  # Result: 0.7952354

# similarity between sentences s0 and s2
print(pairwise.cosine_similarity(x[0:1], x[2:3]))  # Result: 0.42359722
```
# Results

| Model                    | Accuracy   | Source                                                  |
|--------------------------|------------|---------------------------------------------------------|
| SBERT-WikiSec-base (EN)  | 80.42%     | https://arxiv.org/abs/1908.10084                        |
| SBERT-WikiSec-large (EN) | 80.78%     | https://arxiv.org/abs/1908.10084                        |
| **sbert-base-cased-pl**   | **82.31%** | **https://huggingface.co/Voicelab/sbert-base-cased-pl** |
| sbert-large-cased-pl      | 84.42%     | https://huggingface.co/Voicelab/sbert-large-cased-pl    |

# License

CC BY 4.0

# Citation

If you use this model, please cite the following paper:


# Authors

The model was trained by the NLP Research Team at Voicelab.ai.

You can contact us [here](https://voicelab.ai/contact/).