jegormeister committed on
Commit f5427c2
Parent: e322a0e

Update README.md

Files changed (1):
1. README.md (+5 -10)
README.md CHANGED
@@ -1,17 +1,21 @@
 ---
+language: nl
 pipeline_tag: sentence-similarity
 tags:
 - sentence-transformers
 - feature-extraction
 - sentence-similarity
 - transformers
+- robbert
+datasets:
+- clips/mqa
 ---
 
 # jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned
 
 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
-<!--- Describe your model here -->
+This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). It was fine-tuned on 1,000,000 rows of Dutch FAQ question-answer pairs from [clips/mqa](https://huggingface.co/datasets/clips/mqa).
 
 ## Usage (Sentence-Transformers)
 
@@ -70,15 +74,6 @@ print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
 
-
-
-## Evaluation Results
-
-<!--- Describe how your model was evaluated -->
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned)
-
-
 ## Training
 The model was trained with the parameters:
 
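The diff above is truncated before the full Usage section, so for context, here is a minimal sketch of how a sentence-transformers model like this one is typically loaded and used for semantic search. The Dutch query and candidate sentences are illustrative only (they are not from the model card), and `util.cos_sim` assumes a reasonably recent sentence-transformers release.

```python
from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned RobBERT model from the Hugging Face Hub.
model = SentenceTransformer("jegorkitskerkin/robbert-v2-dutch-base-mqa-finetuned")

# Illustrative Dutch FAQ-style query and candidate answers.
query = "Hoe kan ik mijn wachtwoord opnieuw instellen?"  # "How do I reset my password?"
corpus = [
    "Klik op 'wachtwoord vergeten' op de inlogpagina om een nieuw wachtwoord aan te vragen.",
    "Onze openingstijden zijn van 9:00 tot 17:00 op werkdagen.",
]

# Encode query and corpus into 768-dimensional dense vectors.
query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank candidates by cosine similarity (semantic search).
scores = util.cos_sim(query_emb, corpus_emb)
best = scores.argmax().item()
print(corpus[best], scores[0][best].item())
```

Because the model was fine-tuned on FAQ question-answer pairs, a question-to-answer retrieval setup like the one sketched here is the most natural fit for it.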