---
pipeline_tag: sentence-similarity
language:
- pl
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- ipipan/polqa
- ipipan/maupqa
---
# Silver Retriever Base (v1)
Silver Retriever encodes Polish sentences and paragraphs into a 768-dimensional dense vector space and can be used for tasks such as document retrieval or semantic search.
It was initialized from the [HerBERT-base](https://huggingface.co/allegro/herbert-base-cased) model and fine-tuned on the [PolQA](https://huggingface.co/datasets/ipipan/polqa) and [MAUPQA](https://huggingface.co/datasets/ipipan/maupqa) datasets for 15,000 steps with a batch size of 1,024.
## Preparing inputs
The model was trained on question-passage pairs and works best when the input matches the format used during training (a minimal formatting sketch follows this list):
- Prefix each question with the phrase `Pytanie: `, as was done during training.
- The training passages consisted of `title` and `text` concatenated with the special token `</s>`. Even if your passages don't have a `title`, it is still beneficial to prefix the passage with the `</s>` token.
- Although we used the dot product during training, the model usually works better with cosine similarity.
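For example, two small helpers along these lines (the names `format_question` and `format_passage` are just for illustration) produce inputs in the expected format:

```python
def format_question(question: str) -> str:
    # Prepend the "Pytanie:" marker used during training.
    return f"Pytanie: {question}"

def format_passage(text: str, title: str = "") -> str:
    # Concatenate title and text with the special </s> token;
    # keep the </s> prefix even when no title is available.
    return f"{title}</s>{text}"

query = format_question("W jakim mieście urodził się Zbigniew Herbert?")
passage = format_passage(
    "Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
    title="Zbigniew Herbert",
)
```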
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [
    "Pytanie: W jakim mieście urodził się Zbigniew Herbert?",
    "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]
model = SentenceTransformer('ipipan/silver-retriever-base-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
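Since the model usually works better with cosine similarity (see above), you can score the question against the passage with `util.cos_sim` from sentence-transformers. A minimal sketch continuing the snippet above:

```python
from sentence_transformers import util

# Cosine similarity between the question (row 0) and the passage (row 1).
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```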
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings (here, CLS pooling).
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    # Take the embedding of the first token ([CLS]) of each sequence.
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = [
    "Pytanie: W jakim mieście urodził się Zbigniew Herbert?",
    "Zbigniew Herbert</s>Zbigniew Bolesław Ryszard Herbert (ur. 29 października 1924 we Lwowie, zm. 28 lipca 1998 w Warszawie) – polski poeta, eseista i dramaturg.",
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ipipan/silver-retriever-base-v1')
model = AutoModel.from_pretrained('ipipan/silver-retriever-base-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
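To compare these embeddings with cosine similarity (as recommended above), one option is to L2-normalize them and take the dot product. A minimal sketch continuing the snippet above:

```python
import torch.nn.functional as F

# After L2-normalization, the dot product equals cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
score = normalized[0] @ normalized[1]
print(score)
```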
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Additional Information
### Model Creators
The model was created by Piotr Rybak from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as part of the 2014–2020 Smart Growth Operational Programme, CLARIN (Common Language Resources and Technology Infrastructure), project no. POIR.04.02.00-00C002/19.
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]