---
license: mit
datasets:
- squad
- eli5
- sentence-transformers/embedding-training-data
- KennethTM/squad_pairs_danish
- KennethTM/eli5_question_answer_danish
language:
- da
---
*A new version, trained on more data but otherwise identical, is available: [KennethTM/MiniLM-L6-danish-reranker-v2](https://huggingface.co/KennethTM/MiniLM-L6-danish-reranker-v2)*
# MiniLM-L6-danish-reranker
This is a lightweight (~22M parameters) [sentence-transformers](https://www.SBERT.net) cross-encoder for Danish: it takes two texts as input and outputs a relevance score. The model can therefore be used for information retrieval, e.g. given a query and a set of candidate passages, rank the candidates by their relevance to the query.
The maximum sequence length is 512 tokens (query and passage combined).
The model was not pre-trained from scratch but adapted from the English [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) model using a [Danish tokenizer](https://huggingface.co/KennethTM/bert-base-uncased-danish).
It was then trained on ELI5 and SQuAD data machine-translated from English to Danish.
## Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('KennethTM/MiniLM-L6-danish-reranker')
tokenizer = AutoTokenizer.from_pretrained('KennethTM/MiniLM-L6-danish-reranker')

# Tokenize two (query, passage) pairs in one batch
features = tokenizer(['Kører der cykler på vejen?', 'Kører der cykler på vejen?'],
                     ['En panda løber på vejen.', 'En mand kører hurtigt forbi på cykel.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```
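The model outputs one relevance logit per pair, and a higher value means the passage is judged more relevant to the query. If scores bounded between 0 and 1 are preferred, a sigmoid can be applied; a minimal sketch reusing the `scores` tensor from above:

```python
# Map raw logits to (0, 1); the relative ordering of the pairs is unchanged
probs = torch.sigmoid(scores.squeeze(-1))
print(probs)
```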
## Usage with SentenceTransformers
Usage becomes easier with [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('KennethTM/MiniLM-L6-danish-reranker', max_length=512)
# Score two (query, passage) pairs; a higher score means a better match
scores = model.predict([('Kører der cykler på vejen?', 'En panda løber på vejen.'),
                        ('Kører der cykler på vejen?', 'En mand kører hurtigt forbi på cykel.')])
```
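To rerank a set of candidates for a single query, the same `predict` call can be combined with a sort over the returned scores. A minimal sketch, assuming the `model` from above and a made-up candidate list:

```python
import numpy as np

query = 'Kører der cykler på vejen?'
candidates = [
    'En panda løber på vejen.',
    'En mand kører hurtigt forbi på cykel.',
    'Solen skinner i dag.',
]

# Score every (query, candidate) pair and print candidates by descending relevance
scores = model.predict([(query, passage) for passage in candidates])
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.2f}\t{candidates[idx]}")
```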